Revolutionizing Safety: Unveiling the Power of Safety Bubble Detectors in Robotics

By Rajesh Mahapatra

Senior Manager

Analog Devices

By Anil Sripadarao

Principal Engineer

Analog Devices

By Prasanna Bhat

Embedded Software Engineer

Analog Devices

By Colm Prendergast

Senior Principal Research Scientist

Analog Devices

By Shane O’Meara

Senior Manager

Analog Devices

By Dara O’Sullivan

System Applications Director

Analog Devices

By Anders Frederiksen

Principal Specialist

Analog Devices

By Sagar Walishetti

Software Engineer

Analog Devices

August 07, 2024


This article explains the architecture of a real-time safety bubble detector, including the challenges of developing a modular solution, optimizing a high data bandwidth application to run at 30 frames per second (FPS), and designing a multithreaded application and algorithm that accurately detect objects close to the ground.

Robots are becoming increasingly prevalent in various industries, improving efficiency and productivity. However, to ensure the safety of humans and assets in close proximity, robots must be equipped with collision detection and stop capabilities. Safety bubble detectors are designed to detect the presence of objects or people within a designated safety zone.

This article focuses on implementing a Safety Bubble Detector application using the Analog Devices EVAL-ADTF3175D-NXZ time of flight (ToF) platform. The ADTF3175D module has a field of view (FoV) of 75°. To cover a wider FoV in real-world applications, multiple sensors are combined; for example, four modules are used to cover a FoV of approximately 270°. The safety bubble detection algorithm runs on the i.MX8MP processor featured in the EVAL-ADTF3175D-NXZ platform. The algorithm captures depth images from the sensor and detects any object within the safety bubble radius. To facilitate integration into robotics applications, the Safety Bubble Detection application is implemented using the robot operating system (ROS) framework. The algorithm is highly optimized to achieve a frame rate of 30 FPS on this platform. Safety bubble detectors serve as a fundamental component of automated guided vehicles (AGVs) and autonomous mobile robots (AMRs). The safety zone is typically represented as a virtual circular area around an AGV/AMR, indicated by the red circle in Figure 1.

Figure 1. A safety bubble detector.

Safety bubble detectors are fundamental to any AGV/AMR system. As shown in Figure 2, a safety bubble detector has been constructed using four EVAL-ADTF3175D-NXZ modules to cover a FoV of 278°. The modules are arranged in a horizontal setup, with each ToF module rotated 67.5° relative to its neighbor. This configuration reduces blind spots while providing the 278° FoV.

To facilitate communication between the ToF modules and the host system, the ROS publisher-subscriber model is employed, as illustrated in Figure 3. In this setup, Ethernet over USB is used for communication to ensure data integrity and improve communication speed.

Figure 2. A horizontal setup. (a) Top view. (b) Front view.

A safety bubble detection algorithm is used to detect objects within the safety bubble radius. The detection flag is transmitted as a ROS topic, allowing the host machine to subscribe to all the modules’ topics and combine the detection results. Additionally, the modules publish depth, IR, and output images for further analysis. ROS provides effective visualization tools, such as rviz, which can visualize published topics. The application is highly configurable: camera position, rotation, and other configuration values are passed to the ROS nodes as parameters.
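To make the interface concrete, the sketch below shows a minimal per-module ROS node in Python that reads its configuration from parameters and publishes a Boolean detection flag. The node, topic, and parameter names (safety_bubble_node, detection_flag, ~safety_radius, ~camera_yaw) are illustrative assumptions, and the detection step is a placeholder rather than the released algorithm.

```python
#!/usr/bin/env python
# Minimal sketch of a per-module publisher node (hypothetical names,
# not the released Safety Bubble Detector code).
import rospy
from std_msgs.msg import Bool

def main():
    rospy.init_node("safety_bubble_node")

    # Configuration is passed in as ROS parameters (e.g., from a launch file).
    safety_radius = rospy.get_param("~safety_radius", 1.0)  # meters
    camera_yaw = rospy.get_param("~camera_yaw", 0.0)         # sensor rotation, degrees

    flag_pub = rospy.Publisher("detection_flag", Bool, queue_size=1)

    rate = rospy.Rate(30)  # target 30 FPS
    while not rospy.is_shutdown():
        # The real node reads a depth frame from the ToF module here and runs
        # the safety bubble algorithm against safety_radius and camera_yaw.
        # This placeholder always reports "no object detected".
        object_detected = False
        flag_pub.publish(Bool(data=object_detected))
        rate.sleep()

if __name__ == "__main__":
    main()
```

Each sensor’s launch file can then override ~safety_radius and the mounting parameters per module, which mirrors how camera position and rotation are passed to the ROS nodes in this application.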

A multithreaded architecture is implemented in the application, as shown in Figure 4. Three threads, namely input, process, and output, run in parallel. The design aims to minimize latency and ensure that the processing block always operates on the most recent available frame. The input thread reads an image from the ToF module and updates the input queue; the process thread takes frames from the input queue, runs the safety bubble detection algorithm, publishes the detection flag, and pushes the result to the output queue; the output thread reads the output queue and publishes the topics for visualization. In real-time scenarios where the processing block runs at a lower frame rate than the input thread, older frames are discarded so that the most recent frame is processed with minimal latency.
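As a rough illustration of this design, the following Python sketch wires three threads together with single-slot queues that always drop the older frame; read_frame(), detect(), and publish() are simple stand-ins for the real capture, detection, and ROS publishing steps.

```python
# Sketch of the input/process/output pipeline with stale-frame dropping.
# read_frame(), detect(), and publish() are stand-ins for the real steps.
import queue
import threading
import time

def read_frame():
    """Stand-in for reading a depth image from the ToF module."""
    time.sleep(1 / 30)                  # ~30 FPS source
    return object()

def detect(frame):
    """Stand-in for the safety bubble detection algorithm."""
    return False, frame                 # (detected flag, output image)

def publish(detected, output):
    """Stand-in for publishing the detection flag and visualization topics."""
    pass

input_q = queue.Queue(maxsize=1)
output_q = queue.Queue(maxsize=1)

def put_latest(q, item):
    """Drop any pending item so the consumer always sees the newest frame."""
    try:
        q.get_nowait()
    except queue.Empty:
        pass
    q.put(item)

def input_thread():
    while True:
        put_latest(input_q, read_frame())

def process_thread():
    while True:
        frame = input_q.get()                       # most recent available frame
        put_latest(output_q, detect(frame))

def output_thread():
    while True:
        detected, output = output_q.get()
        publish(detected, output)

if __name__ == "__main__":
    for target in (input_thread, process_thread, output_thread):
        threading.Thread(target=target, daemon=True).start()
    time.sleep(2)                                   # let the pipeline run briefly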

Figure 3. Host as a subscriber and ROS nodes as publishers architecture.

Figure 4. A multithreaded program.

Communication between the host and ToF modules occurs via the ROS publisher-subscriber model, using the TCP/IP protocol. The host combines the published output images from the ROS nodes (ToF modules) and publishes the combined output.
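A minimal way to picture the host-side image combination is to tile the four published top-view images into one frame, as sketched below; the 2 × 2 layout and the 512 × 512 frame size are assumptions made for illustration rather than the actual composition used in Figure 6.

```python
# Sketch of combining the four published output images on the host.
# The 2x2 tiling and 512x512 image size are illustrative assumptions.
import numpy as np

def combine_top_views(images):
    """images: list of four H x W x 3 arrays, one per ToF module."""
    top = np.hstack(images[0:2])      # modules 1 and 2 side by side
    bottom = np.hstack(images[2:4])   # modules 3 and 4 side by side
    return np.vstack([top, bottom])   # 2x2 mosaic republished as one image

# Example with blank placeholder frames:
frames = [np.zeros((512, 512, 3), dtype=np.uint8) for _ in range(4)]
combined = combine_top_views(frames)  # 1024 x 1024 x 3
```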

As shown in Figure 5, the host machine is an NVIDIA® Jetson Xavier NX, which powers and communicates with all four ToF modules using the Ethernet over USB protocol.

The default radius of the safety bubble is one meter, which can be configured in the ROS launch files. If an object is detected within this area, the object detection flag is triggered and sent to the host via a ROS topic. The host machine subscribes to the object detection topic from each ToF module. The results are combined using a simple logical OR operation, indicating the presence of an object if any of the sensors detects it within the safety bubble.
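The host-side combination can be sketched as a small ROS node that subscribes to each module’s flag and republishes the OR of the latest values; the topic names used here (/tof1../tof4/detection_flag and /combined_detection) are illustrative assumptions, not the names used by the released application.

```python
#!/usr/bin/env python
# Sketch of the host node combining the four detection flags with a logical OR.
# Topic names are illustrative assumptions.
import rospy
from std_msgs.msg import Bool

flags = {}   # latest flag reported by each ToF module

def make_callback(topic):
    def callback(msg):
        flags[topic] = msg.data
        # An object is reported if ANY sensor detects one inside its bubble.
        combined_pub.publish(Bool(data=any(flags.values())))
    return callback

rospy.init_node("safety_bubble_host")
combined_pub = rospy.Publisher("/combined_detection", Bool, queue_size=1)

for i in range(1, 5):
    topic = "/tof%d/detection_flag" % i
    rospy.Subscriber(topic, Bool, make_callback(topic))

rospy.spin()
```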

Figure 5. Horizontal setup with NVIDIA Jetson Xavier NX.

For visualization, each sensor converts the acquired image into a top view and marks objects with red or green pixels depending on whether they are inside or outside the safety bubble. This image is also published as a ROS topic by each sensor, and the host machine merges them into a single combined image. Figure 6 shows the combined image of all the published output image topics.
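The top-view rendering can be approximated as follows: project each 3D point onto the ground plane, compare its planar distance from the sensor with the bubble radius, and color the pixel accordingly. The point-cloud layout (an N × 3 array in meters), the image scale, and the BGR color order are assumptions made for this sketch.

```python
# Sketch of the per-sensor top-view rendering. The point-cloud layout
# (N x 3 array of x/y/z in meters), image scale, and BGR colors are assumptions.
import numpy as np

def render_top_view(points, radius=1.0, size=512, scale=100.0):
    """points: N x 3 array (x lateral, y up, z forward), sensor at the origin."""
    image = np.zeros((size, size, 3), dtype=np.uint8)

    # Planar (top-view) distance of each point from the sensor.
    dist = np.hypot(points[:, 0], points[:, 2])

    # Map meters to pixels with the sensor at the bottom-center of the image.
    px = (points[:, 0] * scale + size / 2).astype(int)
    py = (size - 1 - points[:, 2] * scale).astype(int)
    valid = (px >= 0) & (px < size) & (py >= 0) & (py < size)

    inside = dist <= radius
    image[py[valid & inside], px[valid & inside]] = (0, 0, 255)    # red: inside the bubble
    image[py[valid & ~inside], px[valid & ~inside]] = (0, 255, 0)  # green: outside
    return image

# Example with a random point cloud in front of the sensor:
cloud = np.random.uniform([-2.0, 0.0, 0.0], [2.0, 1.0, 4.0], size=(1000, 3))
top_view = render_top_view(cloud, radius=1.0)
```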

Figure 6. Combined top view of four ToF modules.

For visualization, a square box is drawn in the top left corner to show the object detection status (green: no objects detected; red: objects detected). See Figure 7.

These images can be visualized using the ROS tool rviz. In addition, the NVIDIA Jetson Xavier NX can be connected to a monitor using an HDMI cable to see the output. Visualizations such as the depth image, point cloud, and top view of the input image can be enabled for analysis. These visualizations provide more detailed information and insights into the detected objects and their spatial relationships. See Figure 8.

Figure 7. Visualization. (a) Objects not detected. (b) Objects detected.

Figure 8. Visualization (debug images for analysis).

SQA Process Used

Standard software quality assurance (SQA) methodologies are used to ensure software safety and quality.

  • Unit testing: ROS supports multiple levels of unit testing.
  • Library unit testing: ROS-independent libraries are tested in isolation (a minimal sketch follows this list).
  • ROS node unit testing: Node unit tests start the node and exercise its external API, that is, its published topics, subscribed topics, and services.
  • Code coverage: Code coverage analysis is performed with a ROS coverage package, which helps eliminate dead code and improve unit test quality.
  • Documentation: The ROS package rosdoc_lite generates Doxygen-format documentation from the source files.
  • Coding style: clang-format is used to format the code, and clang-tidy is used to enforce the ROS coding style guidelines.
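As an example of the library-level testing mentioned above, the sketch below uses Python’s unittest to exercise a hypothetical ROS-independent helper, is_inside_bubble(), which stands in for the actual library function under test.

```python
# Minimal sketch of a library-level unit test. is_inside_bubble() is a
# hypothetical helper standing in for the ROS-independent library under test.
import math
import unittest

def is_inside_bubble(x, z, radius=1.0):
    """Return True if a point at planar position (x, z) lies within the bubble."""
    return math.hypot(x, z) <= radius

class TestSafetyBubble(unittest.TestCase):
    def test_point_inside_bubble(self):
        self.assertTrue(is_inside_bubble(0.3, 0.4, radius=1.0))   # 0.5 m away

    def test_point_outside_bubble(self):
        self.assertFalse(is_inside_bubble(1.0, 1.0, radius=1.0))  # ~1.41 m away

    def test_point_on_boundary(self):
        self.assertTrue(is_inside_bubble(1.0, 0.0, radius=1.0))

if __name__ == "__main__":
    unittest.main()
```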

The safety bubble detector reliably detects objects of various shapes, colors, and sizes, including cables as thin as 5 mm.

The algorithm’s low latency of 30 ms ensures real-time object detection and response.

Because the application leverages the ROS framework for its interfaces and visualization, it is highly portable: it is compatible with any host machine that runs ROS, reducing customers’ time to market.

A ToF sensor’s accuracy is lower for transparent and low-reflectivity objects, which results in late detection of objects such as glass bottles and plastic balls. As an example, Figure 9 depicts the distances at which objects are detected by the algorithm (the safety radius is set to 100 cm). The y-axis lists the test objects. “Glass bottle (12, 7)” means the glass bottle is 12 cm tall and 7 cm wide; where only one value appears in brackets, it indicates the radius of the object or the edge length of the cube. See Table 1 for a summary of the Safety Bubble Detector specifications.

Figure 9. The detection accuracy.

Table 1. Safety Bubble Detector Specifications

| Metric | Value | Remarks |
| --- | --- | --- |
| Detection latency | 30 ms | Image resolution: 512 × 512 |
| Detection area | Circular/rectangular with configurable area | Default: circular area with a radius of 1 meter |
| Field of view | 75° | With one sensor |

Conclusion

This safety bubble detector, built on the ADTF3175D module and the EVAL-ADTF3175D-NXZ ToF platform, has many advantages. It is highly optimized for the i.MX8MP platform, achieving smooth performance at 30 FPS. It uses a multithreaded approach to minimize latency, ensuring quick and responsive operation. It also implements SQA methodologies to ensure software safety and maintain quality standards.

Acknowledgments

We would like to thank the ADI-TOF SDK Team for their support.

Rajesh Mahapatra has 30+ years of work experience and is working in the Software and Security Group of Analog Devices Bangalore. He is passionate about solving customer problems using algorithms and embedded software working on ADI hardware solutions. He works closely with NGOs to plant trees and to provide training to urban economically challenged people to generate livelihood. He has five patents in the system, image processing, and computer vision area.

Anil Sripadarao joined Analog Devices, Inc. in 2007 and works in the Software and Security Group of ADI Bangalore. His areas of interest include audio/video codecs, AI/ML, computer vision algorithms, and robotics. He holds six patents in the image processing and computer vision domain.

Prasanna Bhat is an embedded software engineer in the Software and Security Group at Analog Devices, Bangalore. Prasanna’s expertise lies at the intersection of software development and cutting-edge technologies. His work spans various domains, including robotics, deep learning, embedded systems, Python GUIs, and image processing algorithms for time of flight (ToF) sensor applications.

Colm Prendergast is a senior principal research scientist with the Analog Garage where he works on algorithms and systems for autonomous robotics sensing applications. Colm joined Analog Devices, Inc. in 1989 as a design engineer in Limerick, Ireland. During his career at ADI, Colm has worked on and led projects in a wide variety of applications areas including digital video, audio, communications, DSP, and MEMS. Colm led ADI’s cloud technology development effort in the IoT space as director of IoT cloud technology, worked on ADI’s autonomous vehicle technology, and most recently on ADI’s robotics perception and navigation technology development.

Based in Ireland, Shane O’Meara is a senior manager working in the field of software systems design engineering within the Industrial Automation business unit at Analog Devices, Inc., with a focus on industrial robotics. He joined ADI in 2011 as a product applications engineer driving technology developments for precision ADCs in motor control applications. Shane holds a bachelor’s degree in engineering from the University of Limerick. Before he joined ADI, he worked in various roles focusing on automotive electronics and vision systems.

Dara O’Sullivan is a system applications director with the Industrial Edge, Motion, and Robotics business unit at Analog Devices. His area of expertise is power conversion, control, and monitoring in industrial motion control applications. He received his B.E., M.Eng.Sc., and Ph.D. degrees from University College Cork, Ireland, and has worked in industrial and renewable energy applications in a range of research, consultancy, and industry positions since 2001.

Based in Denmark, Anders Frederiksen is the senior strategic marketing manager for robotics and emerging technologies in the Connected Motion and Robotics business unit at Analog Devices, Inc. Anders has over 25 years of industry experience in digital IC design, product management, and executive management, gained worldwide in both multinational and start-up companies in the semiconductor and robotics industries. He is a regular presenter at industry conferences and forums. He joined ADI in 1998 as a system engineer working on power electronics and motor control in Norwood, MA, and has held various roles within ADI driving technology developments, market engagements, and sales strategies internationally and locally. Anders holds a master of science degree in electrical engineering with honors from the Technical University of Denmark. Before joining ADI, he worked as an assistant professor at the Technical University of Denmark.

Sagar Walishetti works as a software engineer in the Software and Security Group of Analog Devices, Bangalore. He joined ADI in 2019. He has worked in embedded systems, image processing, robotics, and deep learning. One of the most exciting initiatives he worked on while at ADI was the merger of these domains.
