Overview:
This project is about Edge Detection & Motion Sensing with OpenCV on Raspberry Pi 4.
In OpenCV, Edge Detection and Motion Sensing serve as pillars of Image Processing and Computer Vision. Through Image Processing, techniques like edge detection refine visual data, highlighting features for an improved image output. In contrast, Motion Sensing in Computer Vision is about interpretation; it discerns and understands movement by analyzing sequences of images, offering a deeper insight into the visual data.
In this project, we will employ OpenCV on a Raspberry Pi 4 to process video frames. For edge detection, we will use the Canny edge detection method on grayscale versions of the frames. To identify motion, we will calculate the Mean Squared Error (MSE) between consecutive grayscale frames; a significant change in MSE indicates motion. This setup provides a real-time visual representation of edges and potential motion in the video feed.
Components Required
We need the following components for this project. You can purchase each item from the links given:
| S.N. | Components | Quantity | Purchase Link |
|---|---|---|---|
| 1 | Raspberry Pi 4 | 1 | Amazon / SunFounder |
| 2 | Raspberry Pi Camera | 1 | Amazon / SunFounder |
| 3 | SD Card 16/32 GB | 1 | Amazon / SunFounder |
| 4 | 5V, 3A DC Adapter for RPi | 1 | Amazon / SunFounder |
| 5 | LCD Display (Optional) | 1 | Amazon / SunFounder |
| 6 | Mouse & Keyboard (Optional) | 1 | Amazon / SunFounder |
Raspberry Pi Camera Connection
The Raspberry Pi Camera is a peripheral device developed by the Raspberry Pi Foundation to be used with their series of Raspberry Pi single-board computers. The camera module provides a way to add video/photo capabilities to Raspberry Pi projects.
For this project, we can use a 5 mega-pixel Raspberry Pi Camera.
Simply connect the Camera Module to the Raspberry Pi 4 Board using the Camera Connector.
To use the Camera you need to enable the Camera Module first. Open the Raspberry Pi Configuration Tool by typing sudo raspi-config in the terminal. Navigate to Interfacing Options > Camera and enable it.
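On recent Raspberry Pi OS releases the same setting can also be applied without the interactive menu. The exact raspi-config option names can vary between OS versions, so treat this as a sketch rather than a guaranteed recipe:

```shell
# Enable the camera interface non-interactively (0 = enable)
sudo raspi-config nonint do_camera 0
# Reboot so the change takes effect
sudo reboot
```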
What is Mean Squared Error (MSE) and Canny Edge Detection Method?
This project uses the Canny edge detection method for edge detection and the Mean Squared Error (MSE) algorithm for motion detection.
Mean Squared Error (MSE)
MSE is a commonly used metric to measure the difference between two images or signals. It calculates the average squared differences between corresponding values of the two signals or images.
A low MSE indicates that the two are closely related, while a high MSE suggests significant differences. Mathematically, for two m × n images I and J, the MSE is given by:

MSE = (1 / (m · n)) · Σᵢ Σⱼ [I(i, j) − J(i, j)]²

where m and n are the dimensions of the images and the sums run over all pixel positions (i, j).
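As a quick sanity check, the formula can be computed directly with NumPy. This is a standalone sketch, independent of the project code further below:

```python
import numpy as np

def mse(image_a, image_b):
    """Mean squared error between two equal-sized images."""
    diff = image_a.astype("float") - image_b.astype("float")
    return float(np.sum(diff ** 2) / diff.size)

# Two tiny 2x2 "images" that differ in a single pixel by 1:
a = np.array([[1, 2], [3, 4]])
b = np.array([[1, 2], [3, 5]])
print(mse(a, b))  # -> 0.25 (one squared difference of 1, averaged over 4 pixels)
print(mse(a, a))  # -> 0.0 (identical images)
```

Identical frames give an MSE of zero, so any threshold above zero separates "no change" from "change".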
Canny Edge Detection Method
The Canny method is a multi-step algorithm designed to detect a wide range of edges in images.
The key stages include:
- Noise Reduction: Images are smoothed using a Gaussian filter to eliminate noise.
- Gradient Computation: The gradient magnitude and direction are computed for each pixel, highlighting the potential edges.
- Non-maximum Suppression: This step thins potential edges by ensuring that the gradient magnitude is maximum in the edge direction, reducing the thickness of edges.
- Double Thresholding: It differentiates between strong, weak, and non-edges, ensuring true edges are clear.
- Edge Tracking by Hysteresis: Weak edges are either discarded or promoted to strong edges based on their connectivity to strong edges.
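To make the gradient-computation stage concrete, here is a minimal NumPy sketch that applies 3×3 Sobel kernels to a tiny image and computes the gradient magnitude and direction. It is illustrative only; cv2.Canny performs all five stages internally and far more efficiently:

```python
import numpy as np

def sobel_gradients(img):
    """Gradient magnitude and direction via 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal change
    ky = kx.T                                                          # vertical change
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3].astype(float)
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    magnitude = np.hypot(gx, gy)       # edge strength
    direction = np.arctan2(gy, gx)     # edge orientation (radians)
    return magnitude, direction

# A vertical step edge: dark left half, bright right half
img = np.array([[0, 0, 255, 255]] * 4)
mag, _ = sobel_gradients(img)
# The magnitude is large across the whole valid region, which straddles the boundary
```

Non-maximum suppression would then keep only pixels whose magnitude is a local maximum along the gradient direction, thinning this response to a one-pixel-wide edge.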
Raspberry Pi Setup, Libraries & Dependencies Installation
OpenCV is required for edge detection, motion sensing, and the other image processing tasks in the code, so you need to install OpenCV first. Follow this guide to install OpenCV on your system:
The next step is to install the picamera library using pip:

```shell
pip3 install picamera
```
The setup part is complete now. We can move to the Edge Detection & Motion Sensing Project with Raspberry Pi and OpenCV.
Raspberry Pi Python Code for Edge Detection & Motion Sensing with OpenCV
Now let’s develop Python code for edge detection and motion sensing using the OpenCV library and the Raspberry Pi Camera.
Python Code
Open Thonny IDE and paste the following code to the Thonny Editor. Save this file with the name “Motion_detection.py” to any location.
Here is the complete Python code:

```python
import cv2
import time
import numpy as np

CAMERA_DEVICE_ID = 0
IMAGE_WIDTH = 320
IMAGE_HEIGHT = 240
MOTION_BLUR = True

cnt_frame = 0
fps = 0

def mse(image_a, image_b):
    # Mean squared error between two grayscale frames
    err = np.sum((image_a.astype("float") - image_b.astype("float")) ** 2)
    err /= float(image_a.shape[0] * image_a.shape[1])
    return err

def lighting_compensation(frame):
    # Equalize the histogram of the grayscale image
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.equalizeHist(gray)

if __name__ == "__main__":
    try:
        cap = cv2.VideoCapture(CAMERA_DEVICE_ID)
        cap.set(3, IMAGE_WIDTH)   # 3 = CAP_PROP_FRAME_WIDTH
        cap.set(4, IMAGE_HEIGHT)  # 4 = CAP_PROP_FRAME_HEIGHT

        while True:
            start_time = time.time()
            _, frame_raw = cap.read()

            if MOTION_BLUR:
                frame = cv2.GaussianBlur(frame_raw, (3, 3), 0)
            else:
                frame = frame_raw

            compensated_frame = lighting_compensation(frame)
            frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(frame_gray, 100, 200)

            # Compare against the previous frame (skipped on the first iteration)
            if cnt_frame > 0:
                if mse(frame_gray, frame_gray_p) > 100:
                    print('Frame{0}: Motion Detected using MSE!'.format(cnt_frame))

            cv2.imshow('Original', frame)
            cv2.imshow('Compensated', compensated_frame)
            cv2.imshow('Gray', frame_gray)
            cv2.imshow('Edge', edges)

            end_time = time.time()
            seconds = end_time - start_time
            fps = 1.0 / seconds
            print("Estimated fps:{0:0.1f}".format(fps))

            cnt_frame += 1
            frame_gray_p = frame_gray

            if cv2.waitKey(1) == 27:  # Esc key
                break
    except Exception as e:
        print(e)
    finally:
        cv2.destroyAllWindows()
        cap.release()
```
Code Explanation
Let’s dive deeper into the code by analyzing it section by section.
```python
import cv2
import time
import numpy as np

CAMERA_DEVICE_ID = 0
IMAGE_WIDTH = 320
IMAGE_HEIGHT = 240
MOTION_BLUR = True
```
This section imports the necessary libraries and sets up some basic configuration constants: the camera device, the capture resolution, and whether to apply a Gaussian blur to each frame (the MOTION_BLUR flag) before processing.
```python
def mse(image_a, image_b):
    err = np.sum((image_a.astype("float") - image_b.astype("float")) ** 2)
    err /= float(image_a.shape[0] * image_a.shape[1])
    return err
```
This function computes the mean squared error between two images. It’s used later to determine if there’s any motion between successive frames by checking the difference.
```python
def lighting_compensation(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.equalizeHist(gray)
```
This function is designed to improve the quality of images captured in varying lighting conditions. It equalizes the histogram of the grayscale image, enhancing its overall visibility.
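For intuition, the classic equalization mapping can be sketched in plain NumPy. This is a simplified version of what cv2.equalizeHist does for 8-bit images, not a drop-in replacement:

```python
import numpy as np

def equalize_hist(gray):
    """Simplified histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)  # pixel counts per intensity level
    cdf = hist.cumsum()                               # cumulative distribution
    cdf_min = cdf[np.nonzero(hist)[0][0]]             # CDF at the darkest used level
    # Stretch the CDF so the darkest used level maps to 0 and the brightest to 255
    lut = np.clip(np.round((cdf - cdf_min) / (gray.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[gray]

gray = np.array([[50, 50], [100, 200]], dtype=np.uint8)
print(equalize_hist(gray))  # -> [[  0   0] [128 255]]
```

The narrow intensity range 50–200 is spread across the full 0–255 range, which is why the compensated window in this project looks clearer under poor lighting.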
```python
if __name__ == "__main__":
    try:
        cap = cv2.VideoCapture(CAMERA_DEVICE_ID)
        cap.set(3, IMAGE_WIDTH)
        cap.set(4, IMAGE_HEIGHT)
```
This part initializes the main execution. It sets up the video capture from the defined camera device and sets the video resolution.
```python
        while True:
            start_time = time.time()
            _, frame_raw = cap.read()

            if MOTION_BLUR:
                frame = cv2.GaussianBlur(frame_raw, (3, 3), 0)
            else:
                frame = frame_raw

            compensated_frame = lighting_compensation(frame)
            frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(frame_gray, 100, 200)
```
Inside the loop, for every frame, it captures the frame, optionally applies a Gaussian blur, performs lighting compensation, converts the frame to grayscale, and then detects edges.
```python
            if cnt_frame > 0:
                if mse(frame_gray, frame_gray_p) > 100:
                    print('Frame{0}: Motion Detected using MSE!'.format(cnt_frame))
```
This part of the code checks for motion by comparing the current grayscale frame to the previous one using MSE. If the MSE is above 100, it detects motion.
```python
            cv2.imshow('Original', frame)
            cv2.imshow('Compensated', compensated_frame)
            cv2.imshow('Gray', frame_gray)
            cv2.imshow('Edge', edges)

            end_time = time.time()
            seconds = end_time - start_time
            fps = 1.0 / seconds
            print("Estimated fps:{0:0.1f}".format(fps))
```
Processed frames are displayed, and the FPS (Frames Per Second) of the processing is calculated and printed.
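Per-frame FPS figures jitter from one frame to the next. A common refinement (not part of the original code) is to smooth them with an exponential moving average:

```python
class FpsEstimator:
    """Smooths instantaneous frames-per-second with an exponential moving average."""
    def __init__(self, alpha=0.5):
        self.alpha = alpha  # higher alpha = more weight on the newest measurement
        self.fps = None

    def update(self, frame_seconds):
        instant = 1.0 / frame_seconds
        if self.fps is None:
            self.fps = instant  # seed with the first measurement
        else:
            self.fps = self.alpha * instant + (1 - self.alpha) * self.fps
        return self.fps

est = FpsEstimator(alpha=0.5)
print(est.update(0.25))  # first frame took 0.25 s -> 4.0 fps
print(est.update(0.50))  # next frame took 0.50 s -> 0.5*2 + 0.5*4 = 3.0 fps
```

In the main loop you would call est.update(seconds) instead of computing fps = 1.0 / seconds directly.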
```python
            if cv2.waitKey(1) == 27:
                break
    except Exception as e:
        print(e)
    finally:
        cv2.destroyAllWindows()
        cap.release()
```
If the ‘Esc’ key (with ASCII value 27) is pressed, the loop breaks. And in the end, irrespective of how the main loop was exited, the code ensures the OpenCV windows are closed and resources are freed.
Testing & Results of Edge Detection & Motion Sensing
Now we need to run the code for Edge Detection & Motion Sensing with Raspberry Pi OpenCV.
After running the code, 4 windows will open as shown below.
The first window shows the original image, the second the lighting-compensated image (which appears clearer), the third the grayscale conversion, and the fourth the detected edges.
The Thonny Shell will display the estimated frame per second.
It will also display the motion detection status: whenever an object moves or the frame changes significantly, “Motion Detected” is printed in the Thonny Shell.
Conclusion
The Edge Detection & Motion Sensing Project with Raspberry Pi & OpenCV is a comprehensive demonstration of integrating multiple image processing techniques using OpenCV in a real-time scenario. It captures video frames, enhances them through lighting compensation, detects edges using the Canny edge detection, and monitors frame-to-frame changes to detect motion via the Mean Squared Error method.
Moreover, it displays the processed images in real-time and calculates the frames per second to gauge the performance. This combination of techniques not only showcases the power and flexibility of OpenCV but also lays the groundwork for more advanced surveillance or monitoring systems that can operate effectively in varying lighting conditions and detect motion.