Get a snapshot from a videoinput object buffer. Streaming has to be enabled (with start) before calling getsnapshot. If preview is true, the captured image is also shown in a separate FLTK window.
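A minimal capture sequence might look like the following sketch. Passing the preview flag as the second argument to getsnapshot is an assumption based on the description above, and /dev/video0 is only an example device:

    vi = videoinput ("v4l2", "/dev/video0");  # open an example capture device
    start (vi);                               # streaming must be enabled first
    img = getsnapshot (vi, true);             # grab one frame; true is assumed to be the preview flag
    stop (vi);                                # stop streaming when done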
Captured image. The type and size of img depend on the VideoFormat property of vi.
H and W below refer to the height and width returned by get (vi, "VideoResolution").
| VideoFormat | Returned img |
|---|---|
| RGB3, RGB24 | HxWx3 uint8 matrix with RGB values. |
| YUYV, YUV422 | scalar struct with fieldnames Y, Cb and Cr. The horizontal resolution of Cb and Cr is half the horizontal resolution of Y. |
| YV12, YVU420, YU12, YUV420 | scalar struct with fieldnames Y, Cb and Cr. The horizontal and vertical resolution of Cb and Cr is half the resolution of Y. |
| MJPG, MJPEG | uint8 row vector with compressed MJPEG data. The length may vary from frame to frame due to compression. |

A sketch for converting the YUV struct formats into a displayable RGB image follows the MJPEG example below. MJPEG data can be saved as a JPEG file (a Huffman table has to be added) with, for example:
    obj = videoinput ("v4l2", "/dev/video0");  # open the capture device
    set (obj, "VideoFormat", "MJPG");          # request compressed MJPEG frames
    start (obj);                               # enable streaming
    img = getsnapshot (obj);                   # uint8 row vector with one compressed frame
    save_mjpeg_as_jpg ("capture.jpg", img);    # write the frame to capture.jpg
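For the YUV formats, img is returned as a struct with subsampled chroma planes. The following sketch shows one possible way to turn such a struct into a displayable RGB matrix; the nearest-neighbour chroma upsampling with kron and the full-range BT.601 conversion coefficients are assumptions about the camera data, not part of the package:

    # assumption: img is the struct described above (fields Y, Cb, Cr), e.g. from "YU12"
    Y  = double (img.Y);
    # upsample the chroma planes to the size of Y (nearest neighbour)
    Cb = kron (double (img.Cb), ones (size (img.Y) ./ size (img.Cb)));
    Cr = kron (double (img.Cr), ones (size (img.Y) ./ size (img.Cr)));
    # full-range BT.601 YCbCr -> RGB (assumed encoding)
    R = Y + 1.402    * (Cr - 128);
    G = Y - 0.344136 * (Cb - 128) - 0.714136 * (Cr - 128);
    B = Y + 1.772    * (Cb - 128);
    rgb = uint8 (cat (3, R, G, B));   # uint8 saturates values outside [0, 255]
    image (rgb)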
- Frame sequence number: set by the driver, counting the frames (not fields!) in sequence.
- Timestamp: for input streams this is the time when the first data byte was captured, as returned by the clock_gettime() function for the relevant clock id, split into seconds and microseconds (a small arithmetic sketch follows this list).
- Timecode: see https://www.kernel.org/doc/html/v6.1/userspace-api/media/v4l/buffer.html#c.V4L.v4l2_timecode
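As an illustration of the seconds/microseconds split, the capture interval between two frames can be computed like this (the timestamp values below are made up for the example):

    # hypothetical timestamps of two consecutive frames (seconds, microseconds)
    t1_sec = 1234;  t1_usec = 250000;
    t2_sec = 1234;  t2_usec = 283333;
    dt = (t2_sec - t1_sec) + (t2_usec - t1_usec) * 1e-6;   # elapsed time in seconds
    printf ("frame interval: %.6f s (%.1f fps)\n", dt, 1 / dt);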
The following code
 obj = videoinput (__test__device__{:});
 fmts = {set(obj,"VideoFormat").fourcc}
 set (obj, "VideoFormat", fmts{1})
 start (obj)
 img = getsnapshot (obj);
 image (img)
 title (fmts{1})
 stop (obj)
Produces the following output
fmts =
{
  [1,1] = YUYV
  [1,2] = MJPG
  [1,3] = RGB3
  [1,4] = BGR3
  [1,5] = YU12
  [1,6] = YV12
}
and the following figure
[Figure 1]
Package: image-acquisition