
HUSKYLENS 2 is an easy-to-use AI vision sensor developed by DFRobot, designed to bridge the gap between traditional edge computing and Generative AI. Unlike its predecessor (V1), which ran on a microcontroller architecture, HUSKYLENS 2 is powered by a Kendryte K230 dual-core processor running Linux. This upgrade delivers 6 TOPS of neural network computing power, enabling real-time execution of complex models like YOLO and direct integration with Large Language Models (LLMs) via the Model Context Protocol (MCP). It is specifically built for makers, educators, and developers who need high-performance visual perception for Arduino, Raspberry Pi, or ESP32 projects.
For users deciding between the two versions, the following table outlines the critical hardware and capability differences.
| Feature | HUSKYLENS 1 (V1) | HUSKYLENS 2 (V2) | Why It Matters |
|---|---|---|---|
| Processor | Kendryte K210 (MCU) | Kendryte K230 (Linux) | Shifts from simple microcontroller tasks to complex edge computing. |
| AI Power | Basic Object Tracking | 6 TOPS (Trillions of Operations/sec) | Enables complex models like YOLOv8 to run locally at high frame rates. |
| LLM Integration | Not Supported | Built-in MCP Support | Allows the camera to act as a "vision eye" for ChatGPT or Gemini. |
| Screen | 2.0" Non-Touch | 2.4" IPS Touch Screen | Provides a smartphone-like interaction experience for easier tuning. |
| Custom Models | Object Classification only | Custom YOLO Models | Users can train models on a PC and deploy them to the device for specific tasks. |
| Connectivity | UART / I2C | UART / I2C / USB / Wi-Fi (Optional) | Adds wireless capabilities and easier PC connectivity for data transfer. |

The most significant change in HUSKYLENS 2 is the shift from the K210 (Microcontroller Unit) to the K230 (Linux-based) architecture.
Legacy Limitation: The V1's K210 chip was excellent for basic tasks like line tracking but struggled with complex, multi-object environments due to memory limits.
New Capability: The K230 dual-core RISC-V processor allows HUSKYLENS 2 to run a full Linux operating system. This enables the device to handle multitasking, manage file systems like a PC, and execute Python scripts directly on the edge.
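Because the K230 runs Linux, scripts can be written and tested on any PC before being moved to the device. The sketch below is a minimal, illustrative example of the kind of post-processing a user script might do with detection results; the sample data is invented, and the real on-device camera API should be taken from DFRobot's official documentation.

```python
# Illustrative on-device post-processing sketch for HUSKYLENS 2.
# The detection dictionaries below are sample data standing in for
# the camera pipeline's real output (an assumption for this example).

def summarize(detections):
    """Count how many objects of each label were detected in a frame."""
    counts = {}
    for det in detections:
        counts[det["label"]] = counts.get(det["label"], 0) + 1
    return counts

if __name__ == "__main__":
    # On the device, `sample` would come from the live camera feed.
    sample = [
        {"label": "apple", "x": 120, "y": 88},
        {"label": "apple", "x": 300, "y": 150},
        {"label": "cup", "x": 40, "y": 200},
    ]
    print(summarize(sample))  # {'apple': 2, 'cup': 1}
```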

HUSKYLENS 2 includes a built-in MCP (Model Context Protocol) server. This is a game-changer for AI agents.
How it works: Instead of just outputting coordinates (e.g., x:100, y:200), HUSKYLENS 2 sends semantic data (e.g., "I see a red apple on the table") to an LLM.
Use Case: You can connect HUSKYLENS 2 to an LLM agent via Wi-Fi. When you ask the agent, "Is my plant healthy?", the agent uses HUSKYLENS 2 to "look" at the leaves and analyze their condition based on visual data.
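To make the coordinates-vs-semantics distinction concrete, here is a minimal sketch of the kind of translation the MCP layer performs: raw detector output (labels plus pixel coordinates) is turned into a natural-language message an LLM can reason about. The field names and message shape are illustrative assumptions, not the actual MCP schema.

```python
import json

# Sketch: turning raw detections into a semantic payload for an LLM.
# The dict keys ("label", "color") and the message wrapper are
# assumptions for illustration, not HUSKYLENS 2's real MCP schema.

def to_semantic(detections):
    """Describe a list of detections as a short English sentence."""
    if not detections:
        return "I see nothing of interest."
    phrases = [
        f"a {d['color']} {d['label']}" if d.get("color") else f"a {d['label']}"
        for d in detections
    ]
    return "I see " + " and ".join(phrases) + "."

raw = [{"label": "apple", "color": "red", "x": 100, "y": 200}]
print(json.dumps({"role": "tool", "content": to_semantic(raw)}))
# {"role": "tool", "content": "I see a red apple."}
```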


While V1 was limited to simple "learn and recognize" functions, V2 supports professional workflows:
Custom Training: Supports the full YOLO model training pipeline. Users can annotate data on a PC, train a model (e.g., to detect specific industrial defects), and deploy it to the device.
Hardware Expansion: The lens module is now replaceable. Users can swap the standard lens for a 30x Microscope Module or a Night Vision Module, expanding applications into biology or security.
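The PC-side annotation step of a YOLO training pipeline has a well-known label format: each object becomes one line of `class x_center y_center width height`, with coordinates normalized to the image size. The helper below sketches that conversion; the class ID, box, and image dimensions are example values.

```python
# Sketch of the PC-side annotation step in a custom YOLO workflow:
# converting a pixel-space bounding box (x1, y1, x2, y2) into the
# normalized "class xc yc w h" line that YOLO training tools expect.

def to_yolo_line(class_id, box, img_w, img_h):
    """Return one YOLO-format label line for a pixel-space box."""
    x1, y1, x2, y2 = box
    xc = (x1 + x2) / 2 / img_w   # box center, normalized to [0, 1]
    yc = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w        # box size, normalized to [0, 1]
    h = (y2 - y1) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# Example: a 100x100 px defect box in a 640x480 image.
print(to_yolo_line(0, (150, 100, 250, 200), 640, 480))
# 0 0.312500 0.312500 0.156250 0.208333
```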
Which version should you choose? Here is our recommendation.

Stick with HUSKYLENS 1 (V1) if:
- You are teaching basic K-12 STEM concepts (line tracking, face recognition).
- You need the lowest possible power consumption for a simple battery-operated robot.
- Your project code is built entirely on the older Arduino libraries and you don't need the new features.

Upgrade to HUSKYLENS 2 (V2) if:
- You are a developer or maker: you want to run custom Python scripts or train your own YOLO models for specific object detection.
- You work with LLMs: you want to build "Embodied AI" agents that can see and interact with the real world using the MCP protocol.
- You need performance: your project involves fast-moving objects (drones, racing cars) where 6 TOPS of compute power is necessary to prevent lag.
Q: Can HUSKYLENS 2 run offline?
A: Yes. All core visual recognition algorithms (Face Recognition, Object Tracking, Color Recognition, etc.) run locally on the K230 chip. An internet connection is only required if you are using the MCP feature to talk to a cloud-based LLM like ChatGPT.
Q: What controllers is HUSKYLENS 2 compatible with?
A: It retains the standard 4-pin Gravity Interface (UART/I2C), making it fully compatible with Arduino, micro:bit, Raspberry Pi, ESP32, and LattePanda. It also supports the new UNIHIKER board.
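For readers curious what "UART/I2C compatible" looks like at the byte level, here is a sketch of a framed serial request, modeled on the HUSKYLENS V1 protocol (0x55 0xAA header, address byte, payload length, command, checksum). Whether HUSKYLENS 2 keeps this exact framing is an assumption; treat the official DFRobot library as the authoritative reference.

```python
# Sketch of a HUSKYLENS-style UART request frame, modeled on the V1
# serial protocol. The framing (0x55 0xAA header, 0x11 address,
# low-byte-of-sum checksum) and the 0x20 "request results" command
# are taken from V1; assume they may differ on V2.

def build_frame(command, data=b""):
    """Build one request frame: header, address, length, command, data, checksum."""
    frame = bytes([0x55, 0xAA, 0x11, len(data), command]) + data
    checksum = sum(frame) & 0xFF  # low byte of the sum of all prior bytes
    return frame + bytes([checksum])

print(build_frame(0x20).hex())  # 55aa11002030
```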
Q: How do I train a custom model for HUSKYLENS 2?
A: You can use the "Self-Learning" function on the device for simple objects. For complex tasks, you can collect images, annotate them on a PC, train a YOLOv8 model, and upload the .kmodel file to the HUSKYLENS 2 via USB.