Welcome! We’re Justin and Matt, currently working on our MSc dissertations at Imperial College London.
Our research focuses on Vision-Language-Action (VLA) models, with an emphasis on evaluating their performance in real-world settings using the SO-101 robotic arm.
We’ll be sharing regular updates on our progress, experiments, and findings as the project evolves.
Posts
- Multicolored Block Pick-and-Place Dataset
Today, we collected a new pick-and-place dataset of 200 episodes using four differently colored blocks. In most episodes, the robot picks up a block on the left and places it in a bin on the right.
- Are VLAs Overhyped? First Results on a Real Robot
Initial real-world experiments with VLAs on the SO-101, exploring the gap between few-shot expectations and practical performance.