http://jwhsmith.net/2014/12/capturing-a-webcam-stream-using-v4l2/
CAPTURING A WEBCAM STREAM USING V4L2
A few months ago, I came across a blog post dating back to 2013, which described the basics of v4l2, and how to capture a video frame from a camera in Linux. However, that article was missing a few pieces, and since practical (and simple) v4l2 examples are really rare online, I thought I’d publish an article about it.
What’s v4l2?
v4l2 stands for Video For Linux 2, the second version of the V4L API and framework. As opposed to many driver implementations, the v4l2 framework is an integral part of the Linux kernel code. This static integration has been criticised by BSD proponents, and several analogous projects were started for BSD (such as Video4BSD), but none has come to fruition (yet). The V4L2 API allows you to manipulate various video devices, for capture as well as for output. The API is also capable of handling other kinds of devices such as TV tuners, but we’ll stick to webcams here.
This API is mostly implemented as a set of IOCTL calls for you to make to your video devices. Once you’ve understood the general mechanism, and know a few IOCTLs, you’ll be able to manipulate your camera with a certain ease.
Common implementation of a v4l2 application
Aside from the parts strictly related to device communication, v4l2 expects you to rely on a few other system calls. In this article, we’ll go through the following steps:
- Open a descriptor to the device. This is done UNIX-style, basic I/O.
- Retrieve and analyse the device’s capabilities. V4L2 allows you to query a device for its capabilities, that is, the set of operations (roughly, IOCTL calls) it supports. I’ll give a little more detail about that later.
- Set the capture format. This is where you choose your frame size, your format (MJPEG, RGB, YUV, …), and so on. Again, the device must be able to handle your format. There is an IOCTL call which allows you to retrieve a list of available formats (which are independent from the device’s capabilities); I’ll give you a little example.
- Prepare the device for buffer handling. When capturing a frame, you have to submit a buffer to the device (queue), and retrieve it once it’s been filled with data (dequeue). However, before you can do this, you must inform the device about your buffers (buffer request).
- For each buffer you wish to use, you must negotiate characteristics with the device (buffer size, frame start offset in memory), and create a new memory mapping for it.
- Put the device into streaming mode.
- Once your buffers are ready, all you have to do is keep queueing/dequeuing your buffers repeatedly, and every call will bring you a new frame. The delay you set between frames by putting your program to sleep is what determines your FPS (frames per second) rate.
- Turn off streaming mode.
- Close your descriptor to the device.
Note (see comments for more information) : depending on your device, this routine might not work for you. In some cases, devices cannot be put into streaming mode if no buffer is queued. In this case, you’ll have to queue a buffer, switch streaming on, dequeue/queue in a loop, and switch streaming off. More information about this will be given further down.
Each of these steps is covered by a system call or a set of IOCTL calls. However, first things first: you need to know how to make an IOCTL call to a device. With a descriptor stored in fd, you may use the ioctl system call as follows:
ioctl(fd, MY_REQUEST, arg1, ...);
- MY_REQUEST is your IOCTL request. It’s an integer, and V4L2 provides you with constants which map these numbers to readable forms. For example, VIDIOC_QUERYCAP is used to retrieve the device’s capabilities.
- Depending on the request you’re submitting, you may need to pass additional parameters along. In most cases, you have to submit the address of a data structure through which you’ll be able to read the result of your query. The above VIDIOC_QUERYCAP requires one parameter: a pointer to a v4l2_capability structure.
The IOCTL calls we’ll be using in this article return 0 on success, and a negative value otherwise.
Open and close a descriptor to the device
Those are easy, so let’s get over it quickly. Your file descriptor can be obtained just like any other using open, and disposed of using close, two basic UNIX I/O system calls:
int main(void){
    int fd;

    if((fd = open("/dev/video0", O_RDWR)) < 0){
        perror("open");
        exit(1);
    }

    // ...

    close(fd);
    return EXIT_SUCCESS;
}
Note that we need both read and write access to the device.
Retrieve the device’s capabilities
While v4l2 offers a generic set of calls for every device it supports, it is important to remember that not all devices can provide the same features. For this reason, the first step here will be to query the device about its capabilities and details. This is done through the VIDIOC_QUERYCAP request. Note that every v4l2-compatible device is expected to handle at least this request.
struct v4l2_capability cap;

if(ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0){
    perror("VIDIOC_QUERYCAP");
    exit(1);
}
When this request succeeds, the v4l2_capability structure is filled with information about the device:
- driver: The name of the driver in use while communicating with the device.
- card: The name of the device in use.
- bus_info: The location of the device in the eyes of the operating system (for a USB webcam, this is typically a bus path such as usb-…, not the /dev node).
- version: Your driver’s version number.
- capabilities: A 32-bit integer holding your device’s capabilities (one bit per capability). You may find the list of all possible capabilities here. You can use a bitwise & to check for a particular one:
if(!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE)){
    fprintf(stderr, "The device does not handle single-planar video capture.\n");
    exit(1);
}
There are a few other fields, but I’ll stop here. If you’re interested, you’ll find more details in the links above. Now, when it comes to capabilities, it’d be nice to check for the following:
- V4L2_CAP_VIDEO_CAPTURE : we need single-planar video capture, because… we’re capturing video frames.
- V4L2_CAP_STREAMING : we need the device to handle frame streaming so that our queue/dequeue routine can go fluently.
If your application has more specific needs, don’t hesitate to use the table linked above to check for more capabilities. You may also use the card and bus_info fields if you have several devices available and want the user to choose by name and path.
Set our video format
Once we’ve made sure that our device knows the basics, we need to set our frame format. Note that this format must be made available by your device. If you don’t want to list formats programmatically, I suggest you use v4l2-ctl which will do that for you just fine:
$ v4l2-ctl -d /dev/video0 --list-formats-ext
This will give you a list of all available formats. Once you’ve chosen yours, you’ll need to use VIDIOC_S_FMT (set format) to tell your device. This is done using a v4l2_format structure:
struct v4l2_format format;
format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
format.fmt.pix.pixelformat = V4L2_PIX_FMT_MJPEG;
format.fmt.pix.width = 800;
format.fmt.pix.height = 600;

if(ioctl(fd, VIDIOC_S_FMT, &format) < 0){
    perror("VIDIOC_S_FMT");
    exit(1);
}
- type: remember that V4L2 can handle all kinds of devices. It’d be nice to tell it we’re doing video capture.
- fmt.pix.pixelformat: this is your frame format (RGB, YUV, MJPEG, …). v4l2-ctl told you which ones you had available, at which resolutions.
- fmt.pix.width / fmt.pix.height: your frame dimensions. Again, must be handled by your device, for the format you chose.
In my case, I chose MJPEG because it is extremely easy to display using the SDL. Plus, it takes less memory than RGB and YUV. As far as I know, MJPEG is supported by many cameras. Also remember that these parameters have a direct influence on the amount of memory you’ll have to request for the buffers later on. For instance, for an 800×600 RGB24 frame, you’ll store 800×600 = 480000 pixels, each one requiring 3 bytes (R, G, B). All in all: 1440000 bytes (about 1.5MB) per buffer.
Retrieving all available formats programmatically : v4l2-ctl uses the VIDIOC_ENUM_FMT call to list your formats. You will find more information about this call (and its fellow v4l2_fmtdesc structure) on this page. To browse all formats, declare your first structure with .index = 0 and keep incrementing until your ioctl returns EINVAL. Additionally, you might want to have a look at VIDIOC_ENUM_FRAMESIZES to retrieve information about the resolutions supported by your formats.
Inform the device about your future buffers
This step is quite simple, but it’s still necessary: you need to inform the device about your buffers: how are you going to allocate them? How many are there? This will allow the device to write buffer data correctly. In our case, we’ll use a single buffer, and map our memory using mmap. All this information is sent using the VIDIOC_REQBUFS call and a v4l2_requestbuffers structure:
struct v4l2_requestbuffers bufrequest;
bufrequest.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
bufrequest.memory = V4L2_MEMORY_MMAP;
bufrequest.count = 1;

if(ioctl(fd, VIDIOC_REQBUFS, &bufrequest) < 0){
    perror("VIDIOC_REQBUFS");
    exit(1);
}
- type: again, the kind of capture we’re dealing with.
- memory: how we’re going to allocate the buffers, and how the device should handle them. Here, we’ll be using memory mapping, but you’ll find that there are a few other options available.
- count: our buffer count, one here (no need to make it trickier by adding buffers for now).
Allocate your buffers
Now that the device knows how to provide its data, we need to ask it about the amount of memory it needs, and allocate it. Basically, the device is making the calculation I made above, and telling you how many bytes it needs for your format and your frame dimensions. This information is retrieved using the VIDIOC_QUERYBUF call, and its v4l2_buffer structure.
struct v4l2_buffer bufferinfo;
memset(&bufferinfo, 0, sizeof(bufferinfo));

bufferinfo.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
bufferinfo.memory = V4L2_MEMORY_MMAP;
bufferinfo.index = 0;

if(ioctl(fd, VIDIOC_QUERYBUF, &bufferinfo) < 0){
    perror("VIDIOC_QUERYBUF");
    exit(1);
}
A little difference here: I’m clearing the structure’s memory space before using it. In this case, we won’t be the only ones writing into this structure: the device will write to it too. For this reason, since not all fields are initialised by the programmer, it’s best to clean up the garbage first. Just like before, we tell the device about our video capture and memory mapping. The index field is the index of our buffer: indices start at 0, and each buffer has its own. Since we’ve only got one buffer, there is no need to put that code into a loop. Usually, you’d iterate from 0 to bufrequest.count (which may have changed after the IOCTL if the device didn’t like it!) and allocate each buffer, one after the other.
Now, once this call has been made, the structure’s length and m.offset fields are ready. We can therefore map our memory:
void* buffer_start = mmap(
    NULL,
    bufferinfo.length,
    PROT_READ | PROT_WRITE,
    MAP_SHARED,
    fd,
    bufferinfo.m.offset
);

if(buffer_start == MAP_FAILED){
    perror("mmap");
    exit(1);
}

memset(buffer_start, 0, bufferinfo.length);
Since memory mapping is a large topic, here is a link to mmap‘s man page, and more information about memory mapping under Linux (Linux Device Drivers, J. Corbet, A. Rubini, G. Kroah-Hartman). You don’t have to know everything about that to go further, but that’s another fascinating subject if you’re curious.
Here again, think about cleaning up the area. Your frame is going to be stored in there, you don’t want garbage messing around.
Get a frame
This is the part of the code you might want to put in a temporised loop. For this article, I’ll just retrieve one frame from the device and terminate. This is done in three steps:
- Prepare information about the buffer you’re queueing. This requires another v4l2_buffer structure like the one we saw above, nothing new. This helps the device locate your buffer.
- Activate the device’s streaming capability (which we checked earlier through v4l2_capability).
- Queue the buffer. You’re basically handing your buffer over to the device (putting it into the incoming queue), and wait for it to write stuff in it. This is done using the VIDIOC_QBUF call.
- Dequeue the buffer. The device’s done, you may read your buffer. This step is handled using the VIDIOC_DQBUF call: you’re retrieving the buffer from the outgoing queue. Note that this call may hang a little: your device needs time to write its frame into your buffer, as said in the documentation:
“By default VIDIOC_DQBUF blocks when no buffer is in the outgoing queue. When the O_NONBLOCK flag was given to the open() system call, VIDIOC_DQBUF returns immediately with an EAGAIN error code when no buffer is available.”
struct v4l2_buffer bufferinfo;
memset(&bufferinfo, 0, sizeof(bufferinfo));

bufferinfo.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
bufferinfo.memory = V4L2_MEMORY_MMAP;
bufferinfo.index = 0; /* Queueing buffer index 0. */

// Activate streaming
int type = bufferinfo.type;
if(ioctl(fd, VIDIOC_STREAMON, &type) < 0){
    perror("VIDIOC_STREAMON");
    exit(1);
}

/* Here is where you typically start two loops:
 * - One which runs for as long as you want to
 *   capture frames (shoot the video).
 * - One which iterates over your buffers every time. */

// Put the buffer in the incoming queue.
if(ioctl(fd, VIDIOC_QBUF, &bufferinfo) < 0){
    perror("VIDIOC_QBUF");
    exit(1);
}

// The buffer's waiting in the outgoing queue.
if(ioctl(fd, VIDIOC_DQBUF, &bufferinfo) < 0){
    perror("VIDIOC_DQBUF");
    exit(1);
}

/* Your loops end here. */

// Deactivate streaming
if(ioctl(fd, VIDIOC_STREAMOFF, &type) < 0){
    perror("VIDIOC_STREAMOFF");
    exit(1);
}
Again, this part of your code should be in a loop if you’re using several buffers (increment bufferinfo.index):
/* ioctl: VIDIOC_STREAMON */

while(capture_is_running){
    for(i = 0; i < bufrequest.count; i++){
        bufferinfo.index = i;
        /* ioctl: VIDIOC_QBUF */
        /* ioctl: VIDIOC_DQBUF */
    }
}
Once the VIDIOC_DQBUF ioctl call has successfully returned, your buffer(s) is/are filled with your data. In my case, I have a beautiful MJPEG frame ready to be processed. If you’re using RGB or YUV, you are now able to get colour information about every single pixel of your frame: we did it!
Note (see comments for more information) : as I said earlier, this routine (stream on, queue/dequeue, stream off) might not work for you. Some devices will refuse to get into streaming mode if there isn’t already a buffer queued. In this case, your program should look more like this:
struct v4l2_buffer bufferinfo;
memset(&bufferinfo, 0, sizeof(bufferinfo));

bufferinfo.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
bufferinfo.memory = V4L2_MEMORY_MMAP;
bufferinfo.index = 0; /* Queueing buffer index 0. */

// Put the buffer in the incoming queue.
if(ioctl(fd, VIDIOC_QBUF, &bufferinfo) < 0){
    perror("VIDIOC_QBUF");
    exit(1);
}

// Activate streaming
int type = bufferinfo.type;
if(ioctl(fd, VIDIOC_STREAMON, &type) < 0){
    perror("VIDIOC_STREAMON");
    exit(1);
}

while(/* main loop */){
    // Dequeue the buffer.
    if(ioctl(fd, VIDIOC_DQBUF, &bufferinfo) < 0){
        perror("VIDIOC_DQBUF");
        exit(1);
    }

    bufferinfo.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    bufferinfo.memory = V4L2_MEMORY_MMAP;
    /* Set the index if using several buffers */

    // Queue the next one.
    if(ioctl(fd, VIDIOC_QBUF, &bufferinfo) < 0){
        perror("VIDIOC_QBUF");
        exit(1);
    }
}

// Deactivate streaming
if(ioctl(fd, VIDIOC_STREAMOFF, &type) < 0){
    perror("VIDIOC_STREAMOFF");
    exit(1);
}
(of course, you might need another loop in the first one if you decided to use several buffers)
In the example I gave throughout this article, you should now be closing your descriptor to the device. However, here are two little complements if you’re using MJPEG as I was: print your frame to a JPEG file, and display your frame in an SDL window. I’ll assume you know about basic UNIX I/O routines and SDL mechanisms, since this isn’t the topic of this article.
Bonus: save your frame as a JPEG file
MJPEG is nothing but an animated extension of the JPEG format: a sequence of JPEG images. Since we captured a single frame here, there is no real MJPEG involved: all we have is a JPEG image’s data. This means that if you want to transform your buffer into a file… all you have to do is write it:
int jpgfile;
if((jpgfile = open("/tmp/myimage.jpeg", O_WRONLY | O_CREAT, 0660)) < 0){
    perror("open");
    exit(1);
}

write(jpgfile, buffer_start, bufferinfo.length);
close(jpgfile);
Now, since we’re only writing the JPEG data, and not the associated metadata, it is very likely that your image reader will refuse to display anything, claiming it was unable to determine the frame’s dimensions. I’m giving you a little piece of the puzzle, but I didn’t try to write the JPEG’s metadata myself, since this wasn’t part of my program’s needs. Note that some readers will allow you to specify your image’s width.
Bonus: displaying a MJPEG frame with the SDL
The SDL (1.2) has a very interesting feature: it can display an MJPEG frame directly! No need to convert your image, or to make it go through never-ending processing; all you have to do is provide SDL with your buffer and your dimensions, and it’ll do the rest. For that, we’ll need both the SDL and the SDL Image library (SDL_image). The basic setup is as follows:
- Initialise the SDL, the screen surface and SDL_Image.
- Create an I/O stream (RWops) associated with your buffer.
- Create an SDL surface using the previous stream as a data source.
- Blit the surface wherever you want your frame to be.
- Flip the screen!
// Initialise everything.
SDL_Init(SDL_INIT_VIDEO);
IMG_Init(IMG_INIT_JPG);

// Get the screen's surface.
SDL_Surface* screen = SDL_SetVideoMode(
    format.fmt.pix.width,
    format.fmt.pix.height,
    32, SDL_HWSURFACE
);

SDL_RWops* buffer_stream;
SDL_Surface* frame;
SDL_Rect position = {.x = 0, .y = 0};

// Create a stream based on our buffer.
buffer_stream = SDL_RWFromMem(buffer_start, bufferinfo.length);

// Create a surface using the data coming out of the above stream.
frame = IMG_Load_RW(buffer_stream, 0);

// Blit the surface and flip the screen.
SDL_BlitSurface(frame, NULL, screen, &position);
SDL_Flip(screen);

// Free everything, and unload SDL & Co.
SDL_FreeSurface(frame);
SDL_RWclose(buffer_stream);
IMG_Quit();
SDL_Quit();
And there you go; don’t forget your -lSDL and -lSDL_image switches so that the linker succeeds. You should now be able to see an SDL window with your frame in it. Add some loops to your code, and you’ll build yourself a simple camera streamer! If you need more information about this API/framework, here is a link to the documentation I used to write this article. Don’t hesitate to go through it if you have time!
Anyway, that’s pretty much all I wanted to cover today. See you later!
Absolutely brilliant, works perfectly on Raspbian on the Raspberry Pi 2. Thanks for the post.
I found this article extremely enlightening. Thank you. This well-annotated example code is just what I needed to get a hang of how to use v4l2. As you state, there are too few such examples available. I do, however, have a few suggestions:
Adding the necessary includes would be useful to your target audience.
Using “cap” rather than “capabilities” in “Retrieve Device’s Capabilities” would make it consistent with the subsequent section.
if(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE) needs to be
if(!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE)) or an equivalent fix.
Also, I found that I needed to do an initial ioctl(fd, VIDIOC_QBUF, &bufferinfo) before the ioctl(fd, VIDIOC_STREAMON, &type). Otherwise, I get a “VIDIOC_STREAMON: Operation not permitted” error. It then appears that the order of QBUF and DQBUF in the loop needs to be reversed. (I suppose there is a better way that doesn’t do the QBUF ioctl in two different places.)
I am using the Raspbian system on a Raspberry Pi 2. Maybe that makes a difference.
I brought a few edits, thanks !
I have chosen not to add the includes since I believe they would make the code samples much heavier. I am not really targeting a copy/paste audience: the code samples I give are illustrative, but purposely not self-sufficient. The article describes a method, a way to do things, but my readers should remember the rule of diversity, and “distrust all claims for one true way”.
Concerning the QBUF and STREAMON ioctls, I’m afraid I cannot agree. The QBUF and DQBUF calls are related to the streaming mechanism. Streaming mode needs to be activated before the process starts queueing and dequeuing buffers, since that’s exactly what streaming is. Additionally, streaming does not need to be switched on for every buffer; that would make an unnecessary ioctl call which could be disturbing since we’re capturing continuous video frames. Depending on your device, your camera might not be able to handle streaming the way it was described in this article, in which case you’d have to refer to the documentation in order to use the QBUF and DQBUF ioctls another way (through simple memory mapping) 😉
Thank you for your input!
Another site I checked had the following example code (without much annotation):
http://linuxtv.org/downloads/v4l-dvb-apis/capture-example.html
It is there that I saw that the QBUF was first called on each of its buffers in its start_capturing() routine before STREAMON was called. Its main_loop()->read_frame() routine then calls DQBUF followed by QBUF in that order. That program worked directly without modification on my Raspberry Pi 2.
That site seems authoritative with respect to v4l2 as of December, 2014, as evident in
http://linuxtv.org/downloads/v4l-dvb-apis/v4l2spec.html
where the specs seem nearly current. I am hoping that the solution is some overlooked setting in QUERYBUF since what you do seems more reasonable.
Oh, I see the difference you’re referring to. The mechanism is almost identical, but an extra call to QBUF is made before the streaming capability is activated. Now, since QBUF is called again every time a frame is captured (dequeued), you can see that the “main loop” remains pretty much the same:
– In your example: Queue once, stream on, dequeue/queue in a loop, end (stream off).
– In this article: Stream on, queue/dequeue in a loop, end (stream off).
Now in my opinion, the second makes more sense because the process will exit its loop without any buffer queued in the driver. From my point of view, this allows cleaner termination of the capturing process. It might also prevent some error cases: if STREAMON fails, the process exits without any buffer queued and waiting.
However, it seems (from your attempts) that this routine isn’t really accepted by all devices. My own camera will capture frames whether I queue the buffers before STREAMON or after (in the loop). I believe this is the case for a lot of devices, since most of the (few) references I could find (from users) are written this way. I will however bring an edit to my article, in case another reader is in your case 😉 Thank you again!
Hi all!
I’d really like to thank both of you, John and Craig.
This article is crystal clear and Craig’s comments solved my issue!
Regards.
Paco V.
Hello,
I think you should change the following lines from:
struct v4l2_capability capabilities;
if(ioctl(fd, VIDIOC_QUERYCAP, &capabilities) < 0){
perror("VIDIOC_QUERYCAP");
exit(1);
}
to:
struct v4l2_capability cap;
if(ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0){
perror("VIDIOC_QUERYCAP");
exit(1);
}
It would be more consistent with the rest of the snippets.
Also, there is a small typo in this sentence: "This is the part of the code you might want to put is a temporised loop."
I think you mean "in a temporised loop" (not is).
Otherwise, this is a great article.
Thanks.
Hey ssinfod, thanks for your comment!
I’ve brought the edits you suggested.
Great write-up. I think though that in the last code block (on line 20) the variable should be ‘frame’ instead of ‘picture’ as ‘picture’ variable has not been initialized.
20: picture = IMG_Load_RW(buffer_stream);
I also had to add ‘0’ as a second argument to the IMG_Load_RW call but this might be related to my version of the SDL-library
Article edited, thanks! The frame thing was a mistake of mine; I’ve done way too much copy/pasting for that one… :s
As for IMG_Load_RW, it seems SDL 1.2.8 has a second argument, yes. I have no idea when it happened though…
Hi thanks for the info, I’ve been looking for a good tutorial on how to use V4L2. I have however encountered problems with regards to getting external USB webcams to work, was wondering if you could help me? I launched a question at StackOverflow here: http://stackoverflow.com/questions/38018360/v4l2-not-working-for-usb-camera-but-works-with-laptop-webcam
Very useful article. Thanks a lot!
Hi, thanks for your article.
I have a question: I can query the device for its supported settings, but when I try to set one (for example brightness) I get an error: “Request or argp is not valid”!
I tried adding a mutex to ensure the ioctl requests don’t overlap, but it doesn’t work.
Before starting acquisition, VIDIOC_G_CTRL works, but afterwards VIDIOC_G_CTRL and VIDIOC_S_CTRL fail with this error.
Do you know how to change settings during acquisition, or do I need to stop acquisition before changing something and restart it after?
Thanks for your help.
Thank you for your post. I agree that adding all of the dependencies to your code examples would have obscured the simple concepts that you were attempting to portray.
For those looking for a working example, I found the following quite useful:
https://github.com/twam/v4l2grab
Cheers!
Thanks for the link! The main V4L2 includes are indeed linux/videodev2.h and libv4l2.h