Object detection, where every detected object is displayed through live classification on the device or over an IP address, is a trending topic nowadays. It would be very interesting if machines could recognise objects the way humans do.
Using the Edge Impulse platform, users can train AI/ML models without possessing deep knowledge of programming or AI/ML concepts. Edge Impulse is a cloud-based platform that works with a Raspberry Pi to acquire live video and images via a camera interface.
It can run on both an intranet and the internet, helping experimenters and hobbyists prototype their designs and develop various problem-solving applications. Some examples are:

Live entrance door monitoring
Unknown person alert
Industrial object classification and separation using robotic arms
Fruit counting on a tree or machine separator

Components Required

Raspberry Pi 3 B
USB camera
Edge Impulse website
microSD card (32GB) with SD adaptor
HDMI to VGA cable
5V power adaptor with micro-USB connector
SD card reader

Development and Working

Download and install the Raspberry Pi Imager tool on any PC
Launch Raspberry Pi imager
Choose OS as Raspberry Pi OS (32-bit)
Choose SD card
Select Write
Insert SD card into Raspberry Pi
Connect Raspberry Pi to power supply, keyboard, mouse and monitor
If OS is properly installed, then a new window saying “Welcome to Raspberry Pi Desktop” will appear
Connect USB camera to take a photo
Go to the RPi terminal
Run the following commands to install the required dependencies
• curl -sL https://deb.nodesource.com/setup_12.x | sudo bash -
• sudo apt install -y gcc g++ make build-essential nodejs sox gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps
• sudo npm install edge-impulse-linux -g --unsafe-perm
Next, go to https://www.edgeimpulse.com/
Enter your name and email ID
Sign up for free and log in to your account
After that, run Edge Impulse using the following command
• edge-impulse-linux
If the connection is successful, the Raspberry Pi and its camera will appear under the Devices section of Edge Impulse
Here, you can take a photo of any object, like a bottle, a cup, or a face
In the Data Acquisition section, capture at least 100 photos of the different objects for training and testing purposes. You can rebalance your data into a 70:30 train/test split
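The 70:30 rebalance is done for you by the Edge Impulse dashboard, but the idea behind it can be sketched in plain Python (the photo file names below are hypothetical):

```python
import random

def train_test_split(samples, train_ratio=0.7, seed=42):
    """Shuffle a list of samples and split it into train/test sets."""
    shuffled = samples[:]  # copy so the caller's list is untouched
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# 100 hypothetical photo file names, as captured in Data Acquisition
photos = [f"object_{i:03d}.jpg" for i in range(100)]
train, test = train_test_split(photos)
print(len(train), len(test))  # 70 30
```

The split is shuffled first so that photos of the same object do not all end up on one side of the split.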

After this, go to the Dashboard and check the Labeling Method. It should be 'Bounding boxes (object detection)'
Label all the objects via the Labeling Queue

Now go to Impulse design
Set the image width and height to 320×320
Rename the object detection project if required

Save Impulse
In the Image section, configure the processing block; the raw data is shown at the top of the screen. You can save the parameters in either RGB or grayscale
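The RGB-versus-grayscale choice mainly affects the size of the feature vector fed to the model. A quick back-of-the-envelope check for the 320×320 input used here:

```python
# Feature counts for a 320x320 input image
WIDTH, HEIGHT = 320, 320

rgb_features = WIDTH * HEIGHT * 3   # three channels per pixel
gray_features = WIDTH * HEIGHT      # one channel per pixel

print(rgb_features)    # 307200
print(gray_features)   # 102400
```

Grayscale cuts the input to a third of the RGB size, which can speed up training and inference when colour is not needed to tell the objects apart.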

Now go to Generate Features
Images of differing dimensions are reduced to the configured 320×320 size during feature generation
In the Object Detection section, set the number of training cycles to 25 and the learning rate to 0.015

Start training
After training the model, you get a precision score
To validate your model, go to Model Testing and select 'Classify all'

Now go to Live Classification. Hold an object (like a bottle or cup) in front of the USB camera and it is classified in real time with the relevant label
If you want to view the live classification in a browser via the Pi's IP address, run the following command in the RPi terminal
• edge-impulse-linux-runner
This builds and downloads the trained model onto the Raspberry Pi
Then open the IP address printed by the runner in a browser to see live classification
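The downloaded .eim model can also be driven from your own Python script using the Edge Impulse Linux SDK (installed with pip3 install edge_impulse_linux). The sketch below is based on Edge Impulse's published image-classification examples; the model path, camera index, and confidence threshold are assumptions, and the hardware-dependent part is kept inside a function that is defined but not called here:

```python
def extract_detections(result, min_confidence=0.5):
    """Filter an Edge Impulse runner result dict down to (label, score)
    pairs for bounding boxes above the confidence threshold."""
    boxes = result.get("result", {}).get("bounding_boxes", [])
    return [(b["label"], b["value"]) for b in boxes if b["value"] >= min_confidence]

def run_live(model_path="modelfile.eim"):
    """Grab one camera frame and print detections. Requires the SDK,
    OpenCV, and a connected camera, so it is not invoked in this sketch."""
    import cv2
    from edge_impulse_linux.image import ImageImpulseRunner

    with ImageImpulseRunner(model_path) as runner:
        runner.init()
        cap = cv2.VideoCapture(0)          # first USB camera
        ok, frame = cap.read()
        if ok:
            # The SDK expects RGB; OpenCV captures BGR
            img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            features, cropped = runner.get_features_from_image(img)
            result = runner.classify(features)
            for label, score in extract_detections(result):
                print(f"{label}: {score:.2f}")
        cap.release()
```

This gives the same labels as the browser view, but lets you act on detections programmatically, for example to trigger an alert when an unknown person is seen.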

