Hiwonder MentorPi A1 Raspberry Pi Robot Car – Ackermann Chassis, ROS2 AI Coding Robot with Large AI Model ChatGPT, SLAM and Autonomous Driving (Advanced Kit with Raspberry Pi 5 4GB)


    Description

    • Powered by Raspberry Pi 5, compatible with ROS2, and programmed in Python, making it an ideal platform for AI robot development
    • Supports both Mecanum-wheel and Ackermann chassis, allowing flexibility for various applications and meeting diverse user needs
    • Equipped with closed-loop encoder motors, TOF lidar, a 3D depth camera, high-torque servos, and other advanced components to ensure optimal performance
    • Supports SLAM mapping, path planning, multi-robot coordination, vision recognition, target tracking, and more
    • Powered by YOLOv5 and ChatGPT, it enables autonomous driving and natural human-robot interaction through deep learning and multimodal AI.

    Product Description

    MentorPi is a smart robot car powered by Raspberry Pi 5 with ROS2 support. It offers two chassis options: Mecanum-wheel and Ackermann-wheel. Equipped with high-speed closed-loop encoder motors, lidar, a 3D depth camera, and large-torque servos, it delivers high-performance capabilities, including SLAM mapping, path planning, vision recognition, and autonomous driving. With YOLOv5 model training, MentorPi can detect road signs and traffic lights.

    MentorPi also deploys a multimodal large AI model to support more advanced embodied AI applications. To help you unlock its full potential, we offer comprehensive tutorials and videos designed to inspire and support your AI creative projects.

    ① 3D Depth Camera

    The 3D depth camera not only enables AI visual functions but also supports advanced features like depth image data processing and 3D visual mapping and navigation.

    ② Raspberry Pi 5 Controller

    MentorPi is powered by a Raspberry Pi 5 controller, allowing you to embark on motion control, machine vision, and OpenCV projects.

    ③ STL-19P TOF Lidar

    MentorPi is equipped with lidar, which can realize SLAM mapping and navigation, and supports path planning, fixed-point navigation, and dynamic obstacle avoidance.

    ④ High Performance Encoder Motor

    It offers robust torque, has a high-precision encoder, and includes a protective end shell to ensure an extended service life.
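
    The encoder's feedback works by quadrature decoding: two channels 90° out of phase yield both the count and the direction of rotation. A minimal sketch of the idea in Python (the class and names are illustrative, not the vendor's firmware API):

```python
# Minimal sketch of AB-phase quadrature decoding, the principle behind
# the closed-loop encoder motors. Names here are illustrative.

# Transition table: (previous AB state, current AB state) -> step.
# One rotation direction walks 00 -> 01 -> 11 -> 10 -> 00, the other
# direction walks it in reverse.
_STEP = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b01, 0b00): -1, (0b11, 0b01): -1, (0b10, 0b11): -1, (0b00, 0b10): -1,
}

class QuadratureDecoder:
    """Accumulates encoder ticks from successive A/B channel samples."""

    def __init__(self):
        self.state = 0b00
        self.ticks = 0

    def sample(self, a: int, b: int) -> int:
        new = (a << 1) | b
        # Unknown or repeated transitions contribute 0 (no movement).
        self.ticks += _STEP.get((self.state, new), 0)
        self.state = new
        return self.ticks
```

    Wheel speed is then the tick delta per control period divided by the encoder's counts-per-revolution.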

    1) Dual-Controller Design for Efficient Collaboration

    ① Host Controller

    - ROS Controller (Jetson, Raspberry Pi, etc.)

    - AI Visual image processing

    - Deep neural network

    - Human-Machine Voice Interaction

    - Advanced AI algorithms

    - Simultaneous localization and mapping (SLAM)

    ② Sub Controller

    - ROS expansion board

    - High-Frequency PID Control

    - Motor Closed-Loop Control

    - Servo Control and Feedback

    - IMU Data Acquisition

    - Power Status Monitoring
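
    The sub-controller's high-frequency PID loop for motor closed-loop control can be sketched as follows. This is a generic textbook PID in Python; the gains, sample time, and names are illustrative assumptions, not the actual firmware:

```python
# Hedged sketch of the PID loop used for motor closed-loop control.
# Gains and sample time below are illustrative, not the real tuning.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        """One control step: return the actuator command."""
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

    In the real firmware this runs at a fixed high rate, with the measured value coming from the encoder and the output driving the motor PWM.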

    1. Integration of Large AI Model with SLAM Mapping and Navigation

    MentorPi combines a multimodal large model with SLAM: it understands user voice commands via a large language model, enabling multi-point navigation. Once it arrives at the designated location, it uses a vision language model to gain a deep understanding of the surrounding objects and events. This approach greatly enhances the robot's intelligence, adaptability, and overall user experience, making it better suited to meet real-world needs. The following explanation uses the MentorPi Mecanum-wheel version as an example; its functions are the same as those of the Ackermann version.

    1) Semantic Understanding

    MentorPi leverages a large language model to accurately interpret and analyze user voice commands, enabling a deeper understanding of natural language intent.

    2) Environmental Perception

    Powered by a vision language model, MentorPi can interpret objects in its surroundings and understand the spatial layout of the environment.

    3) Intelligent Navigation

    MentorPi continuously sends environmental data to the vision language model for real-time in-depth analysis. It dynamically adjusts its navigation path based on user voice commands, allowing it to autonomously navigate to designated areas and deliver intelligent, adaptive routing.

    4) Scene Understanding

    With the support of a vision language model, MentorPi can deeply interpret the semantic information of its environment, including surrounding objects and events within its field of view.

    2. Large Model Embodied AI Applications

    MentorPi is equipped with a high-performance AI voice interaction module. Unlike conventional AI systems that operate on unidirectional command-response mechanisms, MentorPi leverages ChatGPT to enable a cognitive transition from semantic understanding to physical execution, significantly enhancing the fluidity and naturalness of human-machine interaction. Combined with machine vision, MentorPi exhibits advanced capabilities in perception, reasoning, and autonomous action, paving the way for more sophisticated embodied AI applications.

    1) Voice Control

    With ChatGPT integration, MentorPi can comprehend spoken commands and carry out corresponding actions, enabling intuitive and seamless voice-controlled interaction.

    2) Color Tracking

    MentorPi utilizes vision language model analysis to detect and lock onto any object within its field of view. With the integration of a PID algorithm, it achieves precise and real-time target tracking.

    3) Vision Tracking

    With the advanced perception capabilities of a vision language model, MentorPi can intelligently identify and lock onto target objects even in complex environments, allowing it to perform real-time tracking with adaptability and precision.

    4) Autonomous Patrolling

    Utilizing semantic understanding from a large language model, MentorPi can accurately detect and track lines of various colors in real time while autonomously navigating obstacles, ensuring smooth and efficient patrolling.

    3. Multimodal Large Model Deployment

    1) Large Language Model

    With the integration of the ChatGPT large model, MentorPi operates like a super brain, capable of comprehending diverse user commands and responding intelligently and contextually.

    2) Large Speech Model

    With the integration of the AI voice interaction box, MentorPi is equipped with speech input and output capabilities, functionally giving it 'ears' and a 'mouth'. Utilizing advanced end-to-end speech-language models and natural language processing (NLP) technologies, MentorPi can perform real-time speech recognition and generate natural, human-like responses, enabling seamless and intuitive voice-based human-machine interaction.

    3) Vision Language Model

    MentorPi integrates with OpenRouter's vision large model, enabling advanced image understanding and analysis. It can accurately identify and locate objects within complex visual scenes, while also delivering detailed descriptions that cover object names, characteristics, and other relevant attributes.
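
    OpenRouter exposes an OpenAI-compatible chat-completions endpoint, so a vision-language query amounts to a chat message pairing text with an image. A hedged sketch of the request payload (the model name and prompt are illustrative; the API-key header and HTTP client are omitted):

```python
# Sketch of a vision-language request to OpenRouter's OpenAI-compatible
# chat-completions API. The model name below is illustrative only.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_vision_request(prompt: str, image_url: str, model: str) -> dict:
    """Return the JSON payload for a vision-language query: one user
    message whose content mixes a text part and an image part."""
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    }

# The robot would POST this with an "Authorization: Bearer <key>" header,
# passing a camera frame encoded as a base64 data URL for image_url.
```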

    4. Lidar Function

    MentorPi is equipped with lidar, which supports path planning, fixed-point navigation, navigation with obstacle avoidance, and multiple-algorithm mapping, and realizes lidar guard and lidar tracking functions.

    ① Lidar Mapping and Navigation

    MentorPi can realize advanced SLAM functions with lidar, including localization, mapping and navigation, path planning, dynamic obstacle avoidance, lidar tracking and guarding, etc.

    ② 2D Lidar Mapping Method

    The TOF lidar utilizes the SLAM Toolbox for its mapping algorithm and supports fixed-point navigation and multi-point navigation, as well as TEB path planning.

    ③ Multi-Point Navigation

    MentorPi is equipped with a high-accuracy lidar that provides real-time environmental detection. It supports both fixed-point navigation and multi-point navigation, making it suitable for complex navigation scenarios.

    ④ Multi-Robot Cooperation Mapping and Navigation

    By leveraging multi-robot communication and navigation technology, several robots can collaborate to simultaneously map their surroundings. This enables multi-robot navigation and path planning.

    ⑤ Dynamic Obstacle Avoidance

    Using the TOF lidar, MentorPi can detect obstacles during navigation and intelligently plan its path to effectively avoid them.
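
    The reactive part of obstacle avoidance can be illustrated with a small function over the lidar's 360° range array: check the forward sector and steer away from the nearer side. This is a simplified sketch, not the actual planner, and all thresholds are illustrative:

```python
# Simplified reactive obstacle check over a 360-degree lidar scan.
# Thresholds and the decision rule are illustrative, not the real planner.

def steer_from_scan(ranges, fov_deg=60, stop_dist=0.3):
    """ranges: distances in metres, index 0 = straight ahead, evenly
    spaced over 360 degrees. Returns 'forward', 'left', 'right' or 'stop'."""
    n = len(ranges)
    half = int(n * fov_deg / 360 / 2)
    # The forward sector wraps around index 0.
    right = ranges[:half + 1]   # 0 .. +fov/2
    left = ranges[-half:]       # -fov/2 .. 0
    nearest = min(right + left)
    if nearest > stop_dist:
        return "forward"
    if nearest <= stop_dist / 3:
        return "stop"           # too close to steer around safely
    return "left" if min(right) < min(left) else "right"
```

    A real planner (e.g. the TEB planner mentioned above) also accounts for the robot's footprint, kinematics, and the global path, but the scan-sector idea is the same.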

    ⑥ Lidar Tracking and Guarding

    MentorPi can work with lidar to scan and subsequently track a moving target ahead. It also utilizes the TOF lidar to scan a secured area: upon detecting an intruder, it will automatically turn toward the intruder and activate an alarm.

    5. 3D Depth Camera Function

    ① RTAB-VSLAM 3D Vision Mapping & Navigation

    Equipped with an Angstrong depth camera, MentorPi can effectively perceive environmental changes, allowing for intelligent AI interaction with humans.

    ② Depth Map Data. Point Cloud

    Through the corresponding API, MentorPi can get a depth map, color image, and point cloud from the camera.
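
    The relationship between the depth map and the point cloud is the pinhole camera model: each pixel is back-projected through the camera intrinsics. A sketch in NumPy (in practice the intrinsics come from the camera's calibration; the values in any example are illustrative):

```python
import numpy as np

# Pinhole back-projection: turn a depth map into a point cloud.
# fx, fy, cx, cy are the camera intrinsics; real values come from the
# depth camera's calibration, not from this sketch.

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """depth: (H, W) array of metric depths. Returns an (H*W, 3) array
    of XYZ points in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```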

    ③ Color Recognition and Tracking

    Working with OpenCV, MentorPi can track a specific color. After you select the color in the app, it emits light of the corresponding color and moves with the object of that color.
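
    The core of color tracking is an HSV threshold followed by a centroid computation. In practice this is done with OpenCV (cv2.cvtColor, cv2.inRange, cv2.moments); the sketch below shows the same logic in plain NumPy, with illustrative thresholds:

```python
import numpy as np

# Color tracking in miniature: threshold in HSV space, then aim at the
# centroid of the matching pixels. OpenCV's inRange/moments do this in
# practice; NumPy is used here only to show the logic.

def track_color(hsv, lo, hi):
    """hsv: (H, W, 3) array. Returns the (x, y) pixel centroid of pixels
    whose channels all fall inside [lo, hi], or None if none match."""
    mask = np.all((hsv >= lo) & (hsv <= hi), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# The steering error is then (centroid_x - image_center_x), which can be
# fed to a PID controller to turn the robot toward the colored object.
```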

    ④ Target Tracking

    Through vision-based positioning, the target object can be accurately located and tracked.

    ⑤ QR Code Recognition

    MentorPi can recognize the content of custom QR codes and display the decoded information.

    ⑥ Vision Line Tracking

    MentorPi supports custom color selection. and the robot can identify color lines and track them.

    6. Deep Learning. Autonomous Driving

    In the ROS system, MentorPi deploys the deep learning framework PyTorch, the open-source image processing library OpenCV, and the object detection algorithm YOLOv5, helping users who want to explore autonomous-driving image technology easily enjoy AI autonomous driving.
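
    Loading and running a YOLOv5 detector under PyTorch typically follows the ultralytics/yolov5 torch.hub convention. A hedged sketch (the weights name and confidence threshold are illustrative, and the filtering helper is hypothetical, not part of YOLOv5):

```python
# Sketch of YOLOv5 detection under PyTorch via torch.hub, following the
# ultralytics/yolov5 conventions. Weights name and threshold are
# illustrative; filter_detections is a hypothetical helper.

def load_detector(weights="yolov5s"):
    """Load a pretrained YOLOv5 model via torch.hub (downloads on first use)."""
    import torch  # imported lazily; only needed when actually detecting
    return torch.hub.load("ultralytics/yolov5", weights)

def filter_detections(detections, conf_thresh=0.5):
    """Keep (label, confidence, box) tuples above the confidence threshold."""
    return [d for d in detections if d[1] >= conf_thresh]

# Typical use on a camera frame:
#   model = load_detector()
#   results = model(frame)   # frame: HxWx3 RGB array
#   results.xyxy[0]          # tensor rows of [x1, y1, x2, y2, conf, class]
```

    Training on a custom set of road-sign and traffic-light images follows the same repository's training scripts; the loaded weights are then swapped for the custom ones.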

    ① Road Sign Detection

    By training the deep learning model library, MentorPi can realize autonomous driving functions with AI vision.

    ② Lane Keeping

    MentorPi is capable of recognizing the lanes on both sides and maintaining a safe distance from them.

    ③ Autonomous Parking

    Combined with deep learning algorithms to simulate real scenarios, side parking and garage parking can be achieved.

    ④ Turning Decision Making

    According to the lanes, road signs, and traffic lights, MentorPi will estimate the traffic and decide whether to turn.

    ⑤ YOLO Object Recognition

    MentorPi utilizes the YOLO network algorithm and a deep learning model library to recognize objects.

    ⑥ MediaPipe Development. Upgraded Al Interaction

    MentorPi utilizes the MediaPipe development framework to accomplish various functions, such as fingertip recognition, human body recognition, 3D detection, and 3D face detection.
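
    Fingertip recognition with MediaPipe Hands boils down to reading the hand landmarks, which arrive in normalized [0, 1] image coordinates (landmark 8 is the index fingertip in MediaPipe's hand model). A sketch, with camera capture omitted:

```python
# Sketch of the fingertip-recognition flow with MediaPipe Hands.
# Camera capture and drawing are omitted; names below are illustrative.

INDEX_FINGERTIP = 8  # MediaPipe Hands landmark index for the index fingertip

def landmark_to_pixel(x_norm, y_norm, width, height):
    """Convert a normalized [0, 1] landmark coordinate to pixel coordinates."""
    return int(x_norm * width), int(y_norm * height)

def make_hands():
    """Build a MediaPipe Hands detector (mediapipe imported lazily)."""
    import mediapipe as mp
    return mp.solutions.hands.Hands(max_num_hands=1)

# Typical use on an RGB frame:
#   with make_hands() as hands:
#       result = hands.process(frame)
#       if result.multi_hand_landmarks:
#           tip = result.multi_hand_landmarks[0].landmark[INDEX_FINGERTIP]
#           px = landmark_to_pixel(tip.x, tip.y, frame.shape[1], frame.shape[0])
```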

    7. Open Source Python Programming

    MentorPi supports Python programming. All AI Python code is open source, with detailed annotations for easy self-study.

    8. Wireless Handle Control

    MentorPi supports wireless handle control: the handle connects to the robot via Bluetooth to control it in real time.

    9. App Control

    The WonderPi app supports Android and iOS. Switch game modes quickly and easily to experience various AI games.

    Packing List:

    1* A1 (Ackermann) Chassis

    1* Bracket set

    1* Raspberry Pi 5 (4GB)

    1* 64GB SD card

    1* Cooling fan

    1* Raspberry Pi power supply cable

    1* RRC Lite controller + RRC data cable

    1* Battery cable

    1* Lidar + 4PIN wire

    1* 8.4V 2A charger (DC5.5*2.5 male connector)

    1* 3D Depth camera

    1* Depth data cable

    1* Wireless controller

    1* Controller receiver

    1* EVA ball (40mm)

    1* Card reader

    1* 3PIN wire (100mm)

    1* WonderEcho Pro AI voice interaction box + Type C cable

    1* Accessory bag

    1* User manual

    Model: Ackermann chassis version (Depth camera)

    Weight: 1.2kg

    Size: 213*159*157mm

    Chassis type: Ackermann chassis

    Servo type: LD-1501MG servo and LFD-01 anti-blocking servo (Monocular camera version)

    Motor: 310 metal gear geared motor

    Encoder: AB-phase high-accuracy quadrature encoder

    Material: Full metal aluminum alloy chassis, anodizing process

    ROS controller: RRC Lite controller + Raspberry Pi 5 controller

    Control method: App, wireless handle, and PC control

    Camera: Angstrong binocular 3D depth camera

    Lidar: LDROBOT STL-19P

    Battery: 7.4V 2200mAh 20C LiPo battery

    OS: Raspberry Pi OS + Ubuntu 22.04 LTS + ROS2 Humble (Docker)

    Software: iOS/ Android app

    Communication method: WiFi/ Ethernet

    Programming language: Python/ C/ C++/ JavaScript

    Storage: 64GB TF card

    Package size: 41*22*18cm

    Package weight: About 2.1kg
