
Desktop AI Pet

The 10 Best Products of December 2025
Last updated: December 9, 2025
Yahboom
Yahboom AI Embodied Intelligent Robot Dog, 15DOF, Programming Education AI Robot Dog with Robotic Arms, Thinking Robot Dog, APP-Controlled Visual Recognition Robot Pet (DOGZILLA-LITE with Pi CM4)

98

MAXIMUM QUALITY

VIEW ON AMAZON
Amazon.de
#2

  • 【Embodied Intelligent Robot Dog】DOGZILLA-Lite is equipped with a 5-megapixel camera, a sensitive microphone and a speaker. Combined with the iFlytek Spark large model and OpenRouter/ChatGPT, it fuses visual and auditory input with semantic understanding: it can identify objects in its environment, plan task paths on its own, and adapt to changes in the scene, so hardware and AI genuinely work together.
  • 【Robot arm expansion & AI vision technology】Supports an add-on 3DOF robotic arm for autonomous grasping and handling of objects. A pre-programmed GUI with built-in AI vision and voice programs provides functions such as 3D object recognition, color, face and emotion recognition, and motion detection, leaving plenty of room for creative projects. Note: the robotic arm can only grasp the standard EVA cubes and balls.
  • 【Combined with 3 AI large models】Language model: DOGZILLA-Lite connects to OpenRouter or ChatGPT in real time, understands text and responds flexibly (a minimal API sketch follows this list). Voice model: once connected to the iFlytek Spark model, the microphone and speaker support real-time conversion between speech and text. Vision model: the 5-megapixel camera lets it identify objects and report them as text and speech.
  • 【Flexible Robotic Arm】DOGZILLA-Lite's 3DOF robotic arm is easy to control from the mobile app. The robot dog plus arm can identify the objects to be transported and clear obstacles while following a line; it can also act on your voice commands, grab the object in front of it and place it at a designated location.
  • 【Why choose DOGZILLA-Lite?】It is not just a toy but a ticket to the future: students use it to learn AI principles, makers use it to develop autonomous-driving algorithms, and families use it as an interactive technology companion. Yahboom provides AI vision interaction, OpenCV and AI LLM open-source example code as well as technical support.
  • 449.00 € ON AMAZON
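To make the OpenRouter/ChatGPT bullet above more concrete, here is a minimal Python sketch of the kind of chat-completion request such a robot could send. It is an illustration only, not Yahboom's code: the model name, prompt and OPENROUTER_API_KEY environment variable are placeholders you would substitute yourself.

import os
import requests

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def ask_llm(prompt: str) -> str:
    # Send one user message to OpenRouter's OpenAI-compatible endpoint
    # and return the assistant's text reply.
    response = requests.post(
        OPENROUTER_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={
            "model": "openai/gpt-4o-mini",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

print(ask_llm("There is a red cube in front of you. What should you do?"))

A real robot would feed the reply into its motion planner; here it is simply printed.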
Yahboom
Yahboom AI Embodied Intelligent Robot Dog, 15DOF, Programming Education AI Robot Dog with Robotic Arms, Thinking Robot Dog, APP-Controlled Visual Recognition Robot Pet (DOGZILLA-LITE with Pi CM5)

95

TOP QUALITY

VIEW ON AMAZON
Amazon.de
#3

  • 【Embodied Intelligent Robot Dog】DOGZILLA-Lite is equipped with a 5-megapixel camera, a sensitive microphone and a speaker. Combined with the iFlytek Spark large model and OpenRouter/ChatGPT, it fuses visual and auditory input with semantic understanding: it can identify objects in its environment, plan task paths on its own, and adapt to changes in the scene, so hardware and AI genuinely work together.
  • 【Robot arm expansion & AI vision technology】Supports an add-on 3DOF robotic arm for autonomous grasping and handling of objects. A pre-programmed GUI with built-in AI vision and voice programs provides functions such as 3D object recognition, color, face and emotion recognition, and motion detection (a face-detection sketch follows this list), leaving plenty of room for creative projects. Note: the robotic arm can only grasp the standard EVA cubes and balls.
  • 【Combined with 3 AI large models】Language model: DOGZILLA-Lite connects to OpenRouter or ChatGPT in real time, understands text and responds flexibly. Voice model: once connected to the iFlytek Spark model, the microphone and speaker support real-time conversion between speech and text. Vision model: the 5-megapixel camera lets it identify objects and report them as text and speech.
  • 【Flexible Robotic Arm】DOGZILLA-Lite's 3DOF robotic arm is easy to control from the mobile app. The robot dog plus arm can identify the objects to be transported and clear obstacles while following a line; it can also act on your voice commands, grab the object in front of it and place it at a designated location.
  • 【Why choose DOGZILLA-Lite?】It is not just a toy but a ticket to the future: students use it to learn AI principles, makers use it to develop autonomous-driving algorithms, and families use it as an interactive technology companion. Yahboom provides AI vision interaction, OpenCV and AI LLM open-source example code as well as technical support.
  • 499.00 € ON AMAZON
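The "AI vision" bullet above mentions face recognition among other functions. As a rough idea of what such a feature involves, here is a short OpenCV face-detection loop using the Haar cascade that ships with opencv-python; it is a generic sketch, not Yahboom's implementation, and the camera index 0 is an assumption.

import cv2

# Load the frontal-face Haar cascade bundled with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
camera = cv2.VideoCapture(0)  # assumed camera at index 0

while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Draw a green box around each detected face.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

camera.release()
cv2.destroyAllWindows()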
Yahboom
Yahboom Two Wheel-Legged Robot, Integrated RPi Module, AI Visual Recognition and Voice Interaction, Desktop-Level Dual-Wheel-Foot Structure Adaptive Balancing Robot (Rider Pi 2WD Legged Robot CM5)

88

HIGH QUALITY

VIEW ON AMAZON
Amazon.de
#4

  • 【Desktop-level Two Wheel-legged Robot】Rider-Pi combines the advantages of wheels and legs: wheeled mobility plus legged obstacle crossing. Its linkage structure, dual wheel-leg motion joints and integrated hub motors keep it stable on slopes, steps and other uneven terrain while delivering agile, omnidirectional movement.
  • 【Integrated RPi CM4/RPi CM5 module】Rider-Pi has a built-in AI module based on the Raspberry Pi CM4/CM5. It carries a 2.0-inch IPS color display, 4 programmable buttons, a 5-megapixel camera, a digital microphone and a speaker, giving it strong visual and voice interaction capabilities such as image recognition, face detection and voice dialogue.
  • 【Multi-controller collaboration】Rider-Pi uses a multi-controller architecture: the RPi CM4/CM5 module acts as the host and handles compute-heavy tasks such as image and voice processing, while an ESP32 acts as the subordinate controller responsible for power management, servo and motor driving, and the core motion-control algorithms. This division of labor lets Rider-Pi process tasks and use its resources more efficiently (a hypothetical host-to-controller command sketch follows this list).
  • 【Voice interaction】Dual MEMS digital microphones and a speaker, together with OpenRouter and the provided interface routines, enable image recognition, speech recognition and natural language processing for a more intelligent interactive experience.
  • 【ChatGPT Voice Interaction】Rider-Pi's 5-megapixel camera, dual MEMS digital microphones and speaker, used with ChatGPT, provide rich image and voice interaction: image recognition, speech recognition and natural language processing. Please note: you must provide your own (paid) ChatGPT account to use this feature.
  • 409.00 € ON AMAZON
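The multi-controller bullet above splits work between the Raspberry Pi module (perception, planning) and an ESP32 (motors, power). The sketch below illustrates that split with a made-up serial command: the port, baud rate and JSON frame format are invented for illustration and are not Yahboom's actual protocol.

import json
import serial  # pyserial

# Assumed UART link from the Raspberry Pi module to the ESP32 motion controller.
link = serial.Serial("/dev/ttyAMA0", 115200, timeout=0.1)

def send_velocity(linear: float, angular: float) -> None:
    # Send one newline-terminated JSON frame; the ESP32 would parse it
    # and run the low-level balancing and motor control.
    frame = json.dumps({"cmd": "vel", "lin": linear, "ang": angular}) + "\n"
    link.write(frame.encode("utf-8"))

send_velocity(0.2, 0.0)  # creep forward
send_velocity(0.0, 0.5)  # turn in place
send_velocity(0.0, 0.0)  # stop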
Yahboom
Yahboom Two Wheel-Legged Robot, Integrated RPi Module, AI Visual Recognition and Voice Interaction, Desktop-Level Dual-Wheel-Foot Structure Adaptive Balancing Robot (Rider Pi 2WD Legged Robot CM4)

83

RELIABLE QUALITY

VIEW ON AMAZON
Amazon.de
#5

  • 【Desktop-level Two Wheel-legged Robot】Rider-Pi combines the advantages of wheels and legs: wheeled mobility plus legged obstacle crossing. Its linkage structure, dual wheel-leg motion joints and integrated hub motors keep it stable on slopes, steps and other uneven terrain while delivering agile, omnidirectional movement.
  • 【Integrated RPi CM4/RPi CM5 module】Rider-Pi has a built-in AI module based on the Raspberry Pi CM4/CM5. It carries a 2.0-inch IPS color display, 4 programmable buttons, a 5-megapixel camera, a digital microphone and a speaker, giving it strong visual and voice interaction capabilities such as image recognition, face detection and voice dialogue.
  • 【Multi-controller collaboration】Rider-Pi uses a multi-controller architecture: the RPi CM4/CM5 module acts as the host and handles compute-heavy tasks such as image and voice processing, while an ESP32 acts as the subordinate controller responsible for power management, servo and motor driving, and the core motion-control algorithms. This division of labor lets Rider-Pi process tasks and use its resources more efficiently.
  • 【Voice interaction】Dual MEMS digital microphones and a speaker, together with OpenRouter and the provided interface routines, enable image recognition, speech recognition and natural language processing for a more intelligent interactive experience.
  • 【ChatGPT Voice Interaction】Rider-Pi's 5-megapixel camera, dual MEMS digital microphones and speaker, used with ChatGPT, provide rich image and voice interaction: image recognition, speech recognition and natural language processing (a voice round-trip sketch follows this list). Please note: you must provide your own (paid) ChatGPT account to use this feature.
  • 389.00 € ON AMAZON
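To illustrate the voice-interaction bullets above, here is a small voice round trip: record from the microphone, transcribe to text, and speak a reply. The SpeechRecognition and pyttsx3 libraries are chosen purely for illustration; they are not necessarily what Yahboom ships, and on the real robot the reply would come from ChatGPT rather than a fixed string.

import speech_recognition as sr
import pyttsx3

recognizer = sr.Recognizer()
tts = pyttsx3.init()

# Record one utterance from the default microphone.
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio)  # free web speech recognizer
except sr.UnknownValueError:
    text = ""

reply = f"You said: {text}" if text else "Sorry, I did not catch that."
tts.say(reply)  # on the robot, this reply would come from the LLM instead
tts.runAndWait()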