My other research idea is about low-cost robotic hand prostheses and exoskeleton devices. Affordable 3D printing combined with low-power edge devices now makes this idea feasible.

Abstract
The high cost of advanced prosthetic hands and exoskeleton devices continues to limit accessibility, especially for children and adults in low- and middle-income settings. This research aims to design and validate a low-cost robotic hand prosthesis and exoskeleton that integrates computer vision and adaptive grasp control to serve both children and adults. The device structures will be 3D-printed from lightweight materials for cost reduction and customizability. Object and gesture recognition will be enabled through CNN- and YOLO-based models trained on diverse datasets of hand gestures and everyday objects, and recognized objects will be mapped to a set of adaptive grasp types with real-time force regulation to ensure safety. Expected outcomes include: a vision-based prosthetic and exoskeleton prototype weighing less than 1 kg and capable of executing at least 10 distinct daily-life grasp types; optimized lightweight gesture, object, and grasp recognition models achieving >90% accuracy and inference latency under 50 ms on embedded hardware; and automatic adjustment of grip type and force, with grip forces ranging from 2 N (for fragile objects such as paper cups) to 20 N (for heavier objects such as bottles). By reducing device cost to under EUR 500, this research has the potential to democratize access to functional assistive devices for children and adults with limb disabilities, improving daily independence, employability, and overall quality of life.
Keywords: adaptive grasp, computer vision, exoskeleton, low-cost, prosthetic hand
Research Background
Recent studies highlight significant progress in applying computer vision to assistive robotics. Sarker et al. (2025) demonstrated a vision-enabled pediatric prosthetic hand with an embedded micro-camera and FPGA, reporting up to 100% recognition accuracy for selected grasp types. Similarly, Chen et al. (2025) applied point-cloud analysis for a soft exoskeleton hand, achieving over 90% grasp success while maintaining computational efficiency for real-time applications. Li et al. (2023) designed a binocular vision–based upper limb exoskeleton capable of detecting objects and guiding grasps in rehabilitation tasks, showing promise for patient recovery.
On the affordability side, Samo et al. (2025) introduced a USD 50 linkage-driven exoskeleton prototype, highlighting that low-cost mechanical design is feasible but noting the absence of intelligent grasp adaptation. Wang et al. (2023) combined EEG–sEMG hybrid control in a wrist exoskeleton, achieving high accuracy in intent recognition but at the cost of increased system complexity and sensor expenses.
In parallel, vision-based gesture recognition systems on embedded devices have been explored in related domains (Winahyu et al., 2021). Additional progress has been made in lightweight deep learning architectures for embedded deployment, such as MobileNetV3 and YOLOv5-Nano, which support low-latency inference suitable for wearable robotics (Howard et al., 2020; Jocher et al., 2021).
Research Objectives
This research has the following objectives:
- To design and fabricate a modular, low-cost robotic hand prosthesis and exoskeleton using 3D-printed components and affordable actuators.
- To develop lightweight computer vision models capable of real-time gesture and object recognition on embedded devices.
- To implement adaptive grasp mechanisms that automatically adjust grip type and force according to object geometry and usage context.
- To evaluate technical performance in terms of recognition accuracy, latency, energy efficiency, and grasp success rate.
- To conduct user trials with children and adults, focusing on safety, intuitiveness, and comfort.
- To benchmark affordability by ensuring a production cost of less than EUR 500 per unit, thus significantly undercutting commercial alternatives.

Methodology
The methodology is structured into four key phases: hardware design, computer vision development, adaptive grasp control, and evaluation.
- Hardware and Mechanical Design: The prosthetic hand and exoskeleton structures will be created using 3D-printed lightweight materials for cost reduction and customizability. Low-cost servo motors and cable-driven actuators will be selected for reliable actuation, combined with force and position sensors to provide feedback on grasp performance (Samo et al., 2025).
- Computer Vision and AI Module: Object and gesture recognition will be enabled through CNN- and YOLO-based models trained on diverse datasets, including hand gestures and everyday objects. Vision models will be optimized for embedded deployment using pruning, quantization, and knowledge distillation (Howard et al., 2020; Jocher et al., 2021). Inspired by Winahyu et al. (2021), Raspberry Pi or ESP32-CAM modules will be integrated to handle real-time inference. For depth perception, low-cost stereo cameras or depth sensors will be considered to refine grasp planning through point-cloud data (Chen et al., 2025).
- Adaptive Grasp Mechanism: Recognized objects will be mapped to a set of adaptive grasp types (e.g., pinch, power, lateral), with real-time force regulation to ensure safety. Closed-loop control will be implemented using visual feedback and sensor data to dynamically adjust grip type and strength. This will ensure the prosthesis or exoskeleton can handle fragile items like paper cups as well as heavier tools without requiring manual mode switching (Sarker et al., 2025).
- Evaluation and Testing: Prototypes will be benchmarked for inference accuracy, latency (<50 ms target), and grasp success rate (>90%). User-centered trials will be conducted with child and adult volunteers to assess usability, safety, and adaptability. Comparative cost analysis will be performed against commercial prosthetic devices to validate affordability.
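To make the adaptive grasp mechanism concrete, the mapping from recognized object classes to grasp types and target forces described above can be sketched as a lookup table with a safety cap. This is a minimal illustration only: the class names, grasp labels, force values, and the `select_grasp` helper are hypothetical assumptions, not measured design parameters.

```python
# Hypothetical sketch: map a recognized object class to a grasp type and
# a target grip force. All class names, grasp labels, and force values
# below are illustrative assumptions, not calibrated parameters.

GRASP_TABLE = {
    # object class: (grasp type, target grip force in newtons)
    "paper_cup": ("pinch", 2.0),
    "bottle":    ("cylindrical", 20.0),
    "pen":       ("tripod", 3.0),
    "plate":     ("lateral", 5.0),
}

DEFAULT_GRASP = ("power", 10.0)   # fallback for unrecognized objects
CHILD_FORCE_CAP_N = 5.0          # safety cap for child users (proposal target)

def select_grasp(object_class: str, child_mode: bool = False):
    """Return (grasp_type, grip_force_n) for a detected object class."""
    grasp, force = GRASP_TABLE.get(object_class, DEFAULT_GRASP)
    if child_mode:
        # Never exceed the child-safe force limit, regardless of object.
        force = min(force, CHILD_FORCE_CAP_N)
    return grasp, force
```

In a full system, `object_class` would come from the YOLO detection head, and the table would be learned or tuned per user rather than hard-coded.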
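The closed-loop force regulation step can likewise be sketched as a simple proportional controller that drives the measured grip force toward the target from the grasp table. The gain, the toy linear "plant", and the callback names are illustrative assumptions; a real device would use calibrated force sensors and a properly tuned controller.

```python
# Hypothetical sketch of closed-loop grip force regulation: a proportional
# controller nudges the actuator command until the sensed force reaches the
# target. Gain, tolerance, and the toy plant model are assumptions.

def regulate_grip(target_n, read_force, set_actuator,
                  kp=0.5, steps=50, tol=0.1):
    """Adjust the actuator command until |target - measured| < tol."""
    command = 0.0
    for _ in range(steps):
        error = target_n - read_force()
        if abs(error) < tol:
            break
        command += kp * error        # proportional correction
        set_actuator(command)
    return command

# Toy plant for demonstration: actuator command maps directly to force.
state = {"force": 0.0}
regulate_grip(2.0,
              read_force=lambda: state["force"],
              set_actuator=lambda c: state.__setitem__("force", c))
# state["force"] converges to within 0.1 N of the 2.0 N target.
```

On the real hardware, `read_force` would wrap the force sensor driver and `set_actuator` the servo or cable-tension command, with the loop running at the control rate of the embedded board.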
Expected Results
- A low-cost, vision-based prosthetic and exoskeleton prototype, weighing less than 1 kg and capable of executing at least 10 distinct daily-life grasp types (e.g., cylindrical, pinch, lateral, tripod).
- Optimized lightweight gesture, object, and grasp recognition models achieving >90% accuracy and inference latency under 50 ms on embedded hardware.
- The system will demonstrate automatic adjustment of grip type and force, with grip forces ranging from 2 N (for fragile objects like paper cups) up to 20 N (for heavier objects like bottles), ensuring functional dexterity across diverse tasks.
- The prototype will be validated for safe use across children and adults, with grip force capped at 5 N for children and 20 N for adults, and maintaining real-time responsiveness of <50 ms latency in grasp adaptation.
- A detailed cost analysis will show that the device can be produced for less than EUR 500, making it significantly more affordable than existing commercial prostheses and exoskeletons.
- The project will release at least 1 annotated dataset (≥10,000 images), 1 open-source software repository, and complete 3D CAD design files to enable replication and further research by the community.

I hope someone will fund this research idea so that more low-cost robotic hand prostheses can be produced for low-income communities and improve their quality of life. For further information, please contact me at hansapw@gmail.com
A technopreneur and writer, enthusiastic about learning AI, IoT, Robotics, Raspberry Pi, Arduino, ESP8266, Delphi, Python, Javascript, PHP, etc. Founder of the startup Indomaker.com