Machine Vision Glossary
1
- 1394a
400 Mbit/s capable
- 1394b
800 Mbit/s capable
A
- Absolute Values
Real-world values, such as milliseconds (ms), decibels (dB) or percent (%). Using absolute values is easier and more efficient than applying complex conversion formulas to integer values.
- Analog-to-Digital Converter
Often abbreviated as ADC or A/D converter, it is a device that converts a voltage to a digital number.
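As a rough illustration (not tied to any particular converter or camera), an ideal unsigned N-bit ADC maps an input voltage to one of 2^N integer codes:

```c
#include <math.h>
#include <stdint.h>

/* Illustrative sketch only: map an input voltage to the output code of an
 * ideal, unsigned N-bit A/D converter with full-scale reference v_ref. */
uint32_t adc_code(double v_in, double v_ref, unsigned bits)
{
    double max_code = pow(2.0, bits) - 1.0;        /* e.g. 255 for 8 bits */
    if (v_in <= 0.0)   return 0;
    if (v_in >= v_ref) return (uint32_t)max_code;  /* clipped at full scale */
    return (uint32_t)floor(v_in / v_ref * max_code + 0.5);
}
```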
- API
Application Programming Interface. Essentially a library of software functions.
- Application-specific Vision System
A turnkey vision system addressing a single specific application (e.g. wafer inspection in the semiconductor industry or solutions from system integrators). The primary product function is performed by vision technology. Any single component of the vision system taken on its own has no value to the customer.
- Asynchronous Transfer
Asynchronous transfers, unlike isochronous transfers, do not guarantee when data will be transferred. Asynchronous transfers do guarantee that data will arrive as sent. We use asynchronous transfers when data integrity is a higher priority than speed. An example might be an image data transfer to a printer, where speed is less critical than getting the image pixels correct. Asynchronous transfers are initiated from a single node, designated the ‘requestor’, to or from the address space of another node, designated the ‘responder’. Asynchronous requests are packet based. The requestor node generates a request packet that the 1394 bus sends to the responder node. The responder node is responsible for handling the request packet and creating a response packet that is sent back to the requestor node to complete a single transfer. There are three types of 1394 asynchronous transfers: Read, Write and Lock.
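The request/response flow described above can be sketched very loosely as follows; the field names and types are purely illustrative and do not reflect the actual IEEE 1394 packet layout:

```c
#include <stdint.h>

/* Loose model of the asynchronous request/response flow described above.
 * Field names and types are illustrative, not the real 1394 packet format. */
enum async_type { ASYNC_READ, ASYNC_WRITE, ASYNC_LOCK };

struct async_request {
    uint16_t requestor_id;   /* node initiating the transfer */
    uint16_t responder_id;   /* node whose address space is targeted */
    uint64_t offset;         /* address within the responder's address space */
    enum async_type type;    /* Read, Write or Lock */
};

struct async_response {
    uint16_t responder_id;   /* node sending the response back */
    uint8_t  response_code;  /* completion status reported to the requestor */
};
```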
B
- Bi-telecentric (lens)
Is a lens which is telecentric in both the object space and the image space. This means that both the rays entering the lens and the rays exiting the lens to form an image on the detector are close to perfectly parallel to the main optical and mechanical axis of the lens itself.
- BPP
Bytes per packet. An image is broken into multiple packets of data, which are then streamed isochronously to the host system. Each packet is made up of multiple bytes of data.
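As a rough sketch (ignoring per-packet headers and padding), the number of packets needed to carry one image follows directly from the image size and the packet size:

```c
/* Rough sketch: how many isochronous packets a single image occupies,
 * ignoring packet headers and padding. */
unsigned packets_per_image(unsigned image_bytes, unsigned bytes_per_packet)
{
    return (image_bytes + bytes_per_packet - 1) / bytes_per_packet; /* round up */
}
```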
- Brightness (%)
This is essentially the level of black in an image. A high brightness will result in a low amount of black in the image. In the absence of noise, the minimum pixel value in an image acquired with a brightness setting of 1% should be 1% of the A/D converter’s maximum value.
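Treating brightness as a black-level offset expressed in percent (an assumption for illustration, not a statement about any particular camera), the expected noise-free minimum pixel value can be sketched as:

```c
/* Assumption: brightness acts as a black-level offset expressed in percent.
 * Expected noise-free minimum pixel value for a given A/D full-scale value. */
double expected_min_pixel(double brightness_percent, double adc_full_scale)
{
    return brightness_percent / 100.0 * adc_full_scale; /* e.g. 1% of 255 is ~2.55 */
}
```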
C
- Camera
A device converting optical radiation into analog or digital data, housed in a body that allows standard optics and cabling to be connected precisely. Cameras differ e.g. in the type of sensor technology used (CCD or CMOS), sensor geometry (Area-Scan or Line-Scan), number of pixels, dynamic range (8–12 bits per pixel), data rate, data output (analog or digital) and data interface (e.g. CCIR, RS 170, Camera Link, IEEE 1394, Gigabit Ethernet).
- Config ROM
Configuration read-only memory. A section of memory dedicated to describing low-level device characteristics such as Model and Vendor ID, DCAM version compliance, base address quadlet offsets, etc.
- Configurable Vision System
A vision system which can be used for a variety of applications (e.g. optical character recognition, dimensional measuring) in various industries or environments. The required application can be implemented by the end-user without writing source code, e.g. with the help of a graphical user interface. Typical characteristics of Configurable Vision Systems are scalability, flexibility and a design often based on PC technology.
D
- DCAM
Abbreviation for the IIDC 1394-based Digital Camera (DCAM) Specification, which is the standard used for building FireWire-based cameras. As a general rule, PGR IEEE-1394 cameras conform to the IIDC 1394-based Digital Camera Specification v1.31. Check your camera’s Technical Reference or Getting Started Manual for specific DCAM compliance. The DCAM specification can be purchased from the 1394 Trade Association (http://www.1394ta.org/).
E
- Exposure (EV)
This is the average intensity of the image. The camera uses the other available (non-manually adjustable) controls to adjust the image towards this value.
F
- Firmware
Programming that is inserted into programmable read-only memory, thus becoming a permanent part of a computing device. Firmware is created and tested like software and can be loaded onto the camera.
- Format_7
Encompasses partial or custom image video formats and modes, such as region of interest or pixel-binned modes. Format_7 modes and frame rates are defined by the camera manufacturer, as opposed to the DCAM specification.
- FPS
Frames per second.
- Frame Grabber
A plug-in board that includes e.g. Analog-to-Digital Converters, Look-Up-Tables, memory to store one or more frames, and Digital-to-Analog Converters. Active frame grabbers carry onboard processing units (standard microprocessor, DSP, RISC processor or FPGA) to process the image data, whereas passive frame grabbers condition the image data from a camera, making it compatible for processing by a computer, and perform a restricted set of pre-processing functions. Frame grabbers can operate with either digital or analog cameras.
- Frame Rate
Often defined in terms of number of frames per second (FPS) or frequency (Hz). This is the speed at which the camera is streaming images to the host system. It basically defines the interval between consecutive image transfers.
G
- Gain (dB)
The amount of amplification that is applied to a pixel. An increase in gain can result in a brighter image and an increase in noise.
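Gain expressed in decibels relates to a linear amplification factor by the usual amplitude formula; a minimal sketch:

```c
#include <math.h>

/* Convert a gain expressed in decibels to a linear amplitude factor
 * using the standard relation gain = 10^(dB / 20). */
double gain_db_to_linear(double gain_db)
{
    return pow(10.0, gain_db / 20.0); /* e.g. 6 dB is roughly 2x amplification */
}
```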
- Gamma
Gamma defines the function between incoming light level and output picture level. Gamma can be useful in emphasizing details in the darkest and/or brightest regions of the image.
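A common way to apply gamma to a normalized pixel value is a simple power law; this is a generic sketch, not the transfer function of any specific camera:

```c
#include <math.h>

/* Generic power-law gamma applied to a pixel value normalized to [0, 1].
 * A gamma below 1 lifts dark regions; a gamma above 1 compresses them. */
double apply_gamma(double normalized_pixel, double gamma)
{
    return pow(normalized_pixel, gamma);
}
```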
- GPIO
General Purpose Input/Output.
- Grabbing Images
A commonly used phrase to refer to the process of enabling isochronous transfers on a camera, which allows image data to be streamed from the camera to the host system.
H
- Hyper-pericentric (lens)
Is a lens used to look at the internal surfaces of round or cylindrical objects. As in Pericentric lenses, rays converge towards the object. The entrance pupil of the lens is located outside the lens itself, and beyond the object.
I
- Isochronous Transfer
Isochronous transfers on the 1394 bus guarantee timely delivery of data. Specifically, isochronous transfers are scheduled by the bus so that they occur once every 125µs. Each 125µs timeslot on the bus is called a frame. Isochronous transfers, unlike asynchronous transfers, do not in any way guarantee the integrity of data through a transfer. No response packet is sent for an isochronous transfer. Isochronous transfers are useful for situations that require a constant data rate but not necessarily data integrity. Examples include video or audio data transfers. Isochronous transfers on the 1394 bus do not target a specific node. Isochronous transfers are broadcast transfers which use channel numbers to determine destination.
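Because a cycle occurs every 125 µs (8000 cycles per second), the packet payload needed to sustain a given frame rate can be estimated. This is a rough sketch that ignores packet headers and per-channel overhead:

```c
/* Rough estimate of the isochronous packet payload needed to sustain a given
 * frame rate, based on the 8000 cycles/second (125 us) bus timing described
 * above. Ignores packet headers and any per-channel overhead. */
unsigned required_bytes_per_packet(unsigned image_bytes, double frames_per_second)
{
    double bytes_per_second = image_bytes * frames_per_second;
    return (unsigned)(bytes_per_second / 8000.0 + 0.5); /* payload per 125 us cycle */
}
```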
L
- Lighting
A device or set of devices illuminating the scene according to the needs of the application at hand. Lighting systems differ from each other regarding e.g. light source (e.g. halogen bulb, tungsten lamp, LED or laser), geometry (line light, panel), operational mode (continuous wave or flashed) and light-forming optics.
M
- Machine vision
Machine vision refers to the industrial application of vision technology. It describes the understanding and interpretation of technically obtained images for controlling production processes. It has evolved into one of the key technologies in industrial automation, which is used in virtually all manufacturing industries.
N
- Node
An addressable device attached to a bus. Although multiple nodes may be present within the same physical enclosure (module), each has its own bus interface and address space and may be reset independently of the others.
- Node ID
A 16-bit number that uniquely differentiates a node from all other nodes within a group of interconnected buses. Although the structure of the node ID is bus-dependent, it usually consists of a bus ID portion and a local ID portion. The most significant bits of node ID are the same for all nodes on the same bus; this is the bus ID. The least-significant bits of node ID are unique for each node on the same bus; this is called the local ID. The local ID may be assigned as a consequence of bus initialization.
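Assuming the common IEEE 1394 split of the 16-bit node ID into a 10-bit bus ID (most significant) and a 6-bit local ID (least significant), the two parts can be extracted as sketched below; as noted above, the exact structure is bus-dependent:

```c
#include <stdint.h>

/* Sketch assuming the common IEEE 1394 split of a 16-bit node ID into a
 * 10-bit bus ID (most significant) and a 6-bit local ID (least significant). */
uint16_t bus_id(uint16_t node_id)   { return node_id >> 6;   }
uint16_t local_id(uint16_t node_id) { return node_id & 0x3F; }
```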
O
- Optics
A device or set of devices capturing the optical radiation reflected from the scene and projecting a sharp image of this scene on the imaging device in the camera. Standard optics (i.e. lenses) are normally mounted to the camera body. Additional custom optics may be put in front of a standard lens and vary with regard to the application.
P
- Pericentric (lens)
Is a lens used to look at the external surfaces of round or cylindrical objects. Rays, instead of diverging from the lens (as in common, entocentric lenses) or running parallel (as in telecentric lenses), converge towards the object; this is made possible by the positioning of the entrance pupil of the lens, which is located outside the lens itself, between the object and the front element of the objective.
- PHY
Physical layer. Each 1394 PHY provides the interface to the 1394 bus and performs key functions in the communications process, such as bus configuration, speed signaling and detecting transfer speed, 1394 bus control arbitration, and others.
Q
- Quadlet
A 4 byte (32-bit) value.
- Quadlet Offset
The number of quadlets separating a base address and the desired CSR address. For example, if the base address is 0xFFFFF0F00000 and the value of the quadlet offset is 0x100, then the actual address offset is 0x400 and the actual address 0xFFFFF0F00400.
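The worked example above corresponds to a simple multiply-by-four, since a quadlet is 4 bytes; a minimal sketch:

```c
#include <stdint.h>

/* A quadlet is 4 bytes, so the byte offset is the quadlet offset times 4.
 * With base 0xFFFFF0F00000 and quadlet offset 0x100 this yields
 * 0xFFFFF0F00400, matching the example above. */
uint64_t csr_address(uint64_t base_address, uint32_t quadlet_offset)
{
    return base_address + (uint64_t)quadlet_offset * 4;
}
```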
R
- Register
A term used to describe quadlet-aligned addresses that may be read or written by bus transactions.
S
- Saturation
This is how far a color is from a gray image of the same intensity. For example, red is highly saturated, whereas a pale pink is not.
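One common (HSV-style) way to quantify this for an RGB pixel is shown below; this is a generic sketch, not the definition used by any particular camera control:

```c
/* Generic HSV-style saturation of an RGB pixel: 0 for gray, 1 for a fully
 * saturated color. Not tied to any particular camera's saturation control. */
double rgb_saturation(double r, double g, double b)
{
    double max = r > g ? (r > b ? r : b) : (g > b ? g : b);
    double min = r < g ? (r < b ? r : b) : (g < b ? g : b);
    return max > 0.0 ? (max - min) / max : 0.0;
}
```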
- SDK
Software Development Kit
- Sharpness
This works by filtering the image to reduce blurring at edges.
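A typical implementation convolves the image with a small sharpening kernel; the 3x3 example below is a generic illustration, not any specific camera's filter:

```c
/* Generic 3x3 sharpening kernel: boosts the center pixel relative to its
 * neighbours, emphasizing edges. Illustrative only. */
static const int sharpen_kernel[3][3] = {
    {  0, -1,  0 },
    { -1,  5, -1 },
    {  0, -1,  0 },
};

/* Apply the kernel at pixel (x, y) of a single-channel 8-bit image of the
 * given width; border pixels are assumed to be skipped by the caller. */
int sharpen_pixel(const unsigned char *img, int width, int x, int y)
{
    int sum = 0;
    for (int ky = -1; ky <= 1; ky++)
        for (int kx = -1; kx <= 1; kx++)
            sum += sharpen_kernel[ky + 1][kx + 1] * img[(y + ky) * width + (x + kx)];
    return sum < 0 ? 0 : (sum > 255 ? 255 : sum); /* clamp to 8-bit range */
}
```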
- Shutter (ms)
This is the amount of time that the camera’s electronic shutter stays open; also known as the exposure or integration time. The shutter time defines the start and end point of when light falls on the imaging sensor. At the end of the exposure period, all charges are simultaneously transferred to light-shielded areas of the sensor. The charges are then shifted out of the light-shielded areas of the sensor and read out.
- Signal-to-Noise Ratio (dB)
The difference between the ideal signal that you expect and the real-world signal that you actually see is usually called noise. The relationship between signal and noise is called the signal-to-noise ratio (SNR). SNR is calculated using the general methodology outlined in KB Article 142.
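The referenced KB article is not reproduced here; as a generic illustration, SNR in decibels is often computed from the mean signal level and the noise standard deviation:

```c
#include <math.h>

/* Generic SNR in decibels from a mean signal level and a noise standard
 * deviation; a common textbook definition, not necessarily the exact
 * methodology of KB Article 142. */
double snr_db(double signal_mean, double noise_stddev)
{
    return 20.0 * log10(signal_mean / noise_stddev);
}
```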
- Smart Camera (Intelligent Camera, Compact System)
A camera with embedded intelligence, such as a microprocessor, DSP or FPGA, which can be programmed or parameterized. The Smart Camera can be used for different applications. The required application can be implemented by the end-user either by writing source code or by parameterizing (e.g. with the help of a graphical user interface). Designs with a remote head are also included in this category. Typical characteristics are compactness, a fixed hardware configuration and a design often based on embedded technology.
- SXGA
1280x1024 pixel resolution
U
- UXGA
1600x1200 pixel resolution
V
- VGA
640x480 pixel resolution
- Vision Sensor
A turnkey product based on an image sensor combined with a processor unit integrated in a body and equipped with specific application software. Typically, optics and lighting are already integrated. The application is dedicated to a specific task (e.g. code reading).
- Vision Software
Either a generic software library that can be adapted to many different applications or a dedicated software tool for specific applications (e.g. optical character recognition, robot guidance, surface inspection, dimensional measuring).
- Vision technology
is still a relatively young discipline, which had its breakthrough in the early 1980s. It deals with images or sequences of images with the objective of manipulating and analysing them in order to a) improve image quality (contrast, colour, etc.), b) restore images (e.g. noise reduction), c) code pictures (data compression, for example) or d) understand and interpret images (image analysis, pattern recognition). Thus vision technology can be applied wherever images are generated and need to be analysed: in biology (counting cells), in medicine (interpreting CT scanning results), in the construction industry (thermographic analysis of buildings) or in security (verification of biometric dimensions). Vision technology is an interdisciplinary technology that combines lighting, optics, electronics, information technology, software and automation technology.
X
- XVGA
1024x768 pixel resolution