Bird's Eye View - Evaluating the Bird's Eye View Project
Bird's Eye View (BEV) is a research project by RidgeRun Engineering.
Customers
To receive an evaluation version, please contact RidgeRun.
Evaluation Guide
Dependencies
To evaluate BEV, the following dependencies must be installed:
- Boost
- OpenCV 4.1.0 or higher, built with GStreamer support (and with CUDA support enabled if your machine is CUDA-capable). See the instructions in Building_OpenCV_with_GStreamer_support and Building_OpenCV_with_GStreamer_and_CUDA_support.
- GStreamer
- CUDA
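As a reference, on Ubuntu the Boost, GStreamer, and CUDA dependencies can be installed roughly as follows; the package names below are assumptions for a recent Ubuntu release, so adjust them for your distribution. OpenCV must still be built from source as described in the guides linked above:

# Assumed Ubuntu package names; adjust for your distribution.
sudo apt install libboost-all-dev \
    libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev \
    gstreamer1.0-plugins-good gstreamer1.0-plugins-bad \
    nvidia-cuda-toolkit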
Installation (x86)
You will receive an evaluation package with the following structure:
.
|-- dataset
|   `-- downloadDataset.sh
|-- examples
|   |-- benchmarkMinimal
|   |-- bevFromCameras
|   |-- bevFromFisheyeImages
|   |-- bevFromFisheyeVideo
|   |-- bevFromFisheyeVideoCuda
|   |-- bevFromFisheyeVideoGStreamer
|   |-- bevFromRectilinearImages
|   |-- bevFromRectilinearVideo
|   |-- bevFromRectilinearVideoGStreamer
|   |-- bevToOutputImage
|   |-- bevToOutputVideo
|   |-- removeFisheyeFromImages
|   `-- stitching
|-- include
|   `-- bev
|       |-- bev.h
|       |-- core
|       |   |-- iRemap.h
|       |   |-- iResize.h
|       |   |-- iWarpAffine.h
|       |   `-- iWarpPerspective.h
|       |-- engine.h
|       |-- frameworks.h
|       |-- iDisplay.h
|       |-- iFrameworkFactory.h
|       |-- iImageSaver.h
|       |-- iLoader.h
|       |-- iStitcher.h
|       |-- iVideoSaver.h
|       |-- image.h
|       |-- json
|       |   `-- fileSettingsLoader.h
|       |-- probe.h
|       |-- runtimeError.h
|       `-- settings.h
|-- lib
|   `-- x86_64-linux-gnu
|       |-- libbev.so -> libbev.so.0
|       |-- libbev.so.0 -> libbev.so.0.4.1
|       |-- libbev.so.0.4.1
|       `-- pkgconfig
|           `-- bev.pc
`-- prototype
    |-- autoCalibrationSettings.json
    |-- carSettings.json
    |-- defaultSettings.json
    `-- indoorSettings.json

10 directories, 40 files
To use the BEV library, extract the package contents into a folder and follow these steps:
1. Copy the contents of:
├── lib
│   └── x86_64-linux-gnu
│       ├── libbev.so -> libbev.so.0
│       ├── libbev.so.0 -> libbev.so.0.4.1
│       ├── libbev.so.0.4.1
│       └── pkgconfig
│           └── bev.pc
to your system. This can be done with the following command:
sudo cp -rfv ./lib/x86_64-linux-gnu/* /usr/lib/x86_64-linux-gnu/
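After copying the libraries, refresh the dynamic linker cache so that the new libbev.so symlinks are picked up:

sudo ldconfig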
Likewise, the contents of:
├── include
│   └── bev
must be copied to /usr/include/:
sudo cp -rfv ./include/bev /usr/include
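At this point you can verify that the library is visible to the build system. Since the package ships a bev.pc file (copied above into the system pkgconfig directory), pkg-config should be able to report the library's version and flags:

pkg-config --modversion bev
pkg-config --cflags --libs bev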
Get the dataset
To use our sample videos and images, you need to download the dataset. This can be done by executing:
./dataset/downloadDataset.sh
This command will download a dataset with the following structure:
dataset/
|-- downloadDataset.sh
|-- misc
|   |-- background.png
|   |-- topCar.png
|   `-- transformation.png
|-- real
|   |-- images
|   |   |-- fisheye
|   |   |   |-- back.png
|   |   |   |-- front.png
|   |   |   |-- left.png
|   |   |   `-- right.png
|   |   `-- rectilinear
|   |       |-- back.png
|   |       |-- car
|   |       |   |-- back.png
|   |       |   |-- front.png
|   |       |   |-- left.png
|   |       |   `-- right.png
|   |       |-- front.png
|   |       |-- left.png
|   |       `-- right.png
|   `-- videos
|       |-- fisheye
|       |   |-- camera0.mp4
|       |   |-- camera1.mp4
|       |   |-- camera2.mp4
|       |   `-- camera3.mp4
|       `-- rectilinear
|           |-- back.mp4
|           |-- front.mp4
|           |-- left.mp4
|           `-- right.mp4
`-- synthetic
    |-- README.md
    |-- SyntheticImageGeneratorModel.blend
    `-- stitching
        |-- back.png
        |-- city.jpg
        |-- front.png
        |-- left.png
        `-- right.png

11 directories, 31 files
Run the examples
The demos included in the evaluation use the videos and images under the dataset/real/ directory. To run a demo, bevFromFisheyeImages for instance, execute:
cd examples
./bevFromFisheyeImages
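If a demo fails to start with a missing-library error, a quick sanity check (assuming the library was installed as described above) is to confirm that the binary resolves the evaluation library:

# libbev.so.0 should resolve to /usr/lib/x86_64-linux-gnu/
ldd ./bevFromFisheyeImages | grep libbev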
Calibration
You can also execute the bevFromCameras demo to use your own cameras. The configuration file for this demo is prototype/autoCalibrationSettings.json; you must modify it according to your setup.
Currently, the evaluation version of the library does not include a demo for the calibration script, so you need to provide images for a test calibration. Once the library license has been acquired, the full calibration tool, including its source code, will be delivered.
RidgeRun can generate the calibration values for your cameras; to request this, please go through the following steps.
Step 1: Get a checkerboard
The library uses a checkerboard pattern to ease the calibration process, speeding up the extraction of the parameters needed by the IPM (inverse perspective mapping) transformation.
Please choose a pattern based on your needs, following these guidelines:
- Patterns can be downloaded from here, or generated locally as shown in the sketch after this list.
- Recommended: 7x4 inner vertices, i.e. 8x5 squares (other sizes might work, but this is the recommended configuration).
- Print it on clear white paper.
- Choose the required printing paper size (A1, A2, A3, etc.) based on your target camera distribution size.
- Make sure the paper will cover at least one third of the required centered image view. Use the following image as an example:
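If you prefer to generate the pattern yourself, OpenCV ships a pattern generator script (gen_pattern.py, under doc/pattern_tools in the OpenCV sources). A minimal sketch for the recommended 8x5-square checkerboard, assuming a 30 mm square size (flag names per recent OpenCV versions; adjust the square size to the paper size chosen above):

# Generates an 8x5-square checkerboard as an SVG; print it at 100% scale.
python gen_pattern.py -o checkerboard.svg --type checkerboard \
    --rows 5 --columns 8 --square_size 30 --units mm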
Step 2: Fisheye camera calibration
This step is only required if you have fisheye cameras and you do not know the distortion and camera intrinsic parameters. If you are not using fisheye cameras, or you already know the camera parameters, you can skip to Step 3.
For RidgeRun to perform the calibration of the camera, you will need to provide 40 good calibration images of each camera (front, back, left, and right). All images provided must have the same resolution.
The image below provides examples of good and bad calibration images. Notice that in the good calibration images (top row), the checkerboard is relatively close to and in front of the camera, at different angles and positions.
In the images that are not suitable for calibration (bottom row), the checkerboard is far from the camera or at angles that make the calibration software fail to detect it.
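One simple way to capture the calibration images is with a GStreamer snapshot pipeline. The sketch below assumes a V4L2 camera on /dev/video0 and a 1280x720 resolution; adjust both to your hardware:

# Capture 40 snapshots, waiting for Enter between each one so the
# checkerboard can be repositioned. /dev/video0 is an assumed device path.
for i in $(seq -w 1 40); do
  echo "Reposition the checkerboard, then press Enter to capture image $i"
  read -r
  gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=1 ! \
      video/x-raw,width=1280,height=720 ! videoconvert ! \
      jpegenc ! filesink location="front-calib-$i.jpg"
done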
Step 3: BEV calibration
For this step, you will need to provide one image from each camera (front, back, left, and right) and, if possible, a short video from each camera to verify the calibration. A capture sketch is given after the guidelines below.
To generate the images and videos, please follow these guidelines:
- Make sure the checkerboard is perpendicular to the camera plane and placed at the front center of the camera's field of view. Notice that the checkerboard's widest side must be parallel to the camera image's widest side.
- Enough separation (O, P, Q, R) is needed between the camera origin and the checkerboard pattern. These separations can differ from each other depending on your final application, but the previous restrictions must still be met.
- All cameras must be able to see each red box region (there must be some overlap between the camera views). Examples:
- Front Camera: Must see 1, 2, and A.
- Left Camera: Must see 1, 3, and B.
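As a reference, the image and the short verification video for each camera can be captured with GStreamer pipelines such as the following sketch (assuming a V4L2 camera on /dev/video0; repeat per camera, changing the device and the file names):

# One still image per camera.
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=1 ! videoconvert ! \
    pngenc ! filesink location=front.png

# A short verification video, roughly 10 s at 30 fps (300 buffers);
# -e sends EOS so the MP4 file is finalized correctly.
gst-launch-1.0 -e v4l2src device=/dev/video0 num-buffers=300 ! videoconvert ! \
    x264enc ! mp4mux ! filesink location=front.mp4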