Watch a Robot Stuff Cash Into a Wallet Just Like You Do


In 2026, we’re seeing robots progress by leaps and bounds with markedly improved dexterity, the kind of progress long needed in the quest for truly useful household helpers. Now a new AI model has arrived to power robots through activities, including folding laundry, constructing boxes, fixing other robots and even filling wallets with flimsy paper money.

Earlier this month, California-based company Generalist AI released Gen-1, a new physical AI model that makes robots capable of performing all of these tasks (and more) with success. It's a big step forward in terms of robots designed for the real world based on intelligence born from the real world, Pete Florence, co-founder and CEO of Generalist AI, told me.

In most of the example videos published by the company, Gen-1 is seen running on a pair of robotic arms, but that’s not all it’s built for. “Gen-1 is designed to be the brain of any robot, meaning the same model can run on a humanoid, an industrial arm or other robotic systems,” said Florence.

Already, this has proved to be a breakthrough year for general-purpose humanoid robots, with companies including Boston Dynamics and Honor unveiling cutting-edge bots capable of uncannily humanlike movements. The market for robots is expected to explode, with one estimate from Morgan Stanley predicting growth to a $5 trillion market by 2050. Predictions see robots coming for industry, retail, hospitality and care environments before eventually landing in our homes. To get us there, we need to see further advances in AI.

Training robots to live alongside humans

Over the past few years we’ve seen large language models, such as ChatGPT, Gemini and Claude, evolve at lightning speed. The same hasn’t been true of the physical AI models required to power robots, in large part because of a lack of data to train those models on. Robots — and especially humanoid robots — must learn to navigate a world built for humans just as a human would.

Often this data is collected from robots performing tasks while being teleoperated by humans, but not Gen-1. Instead, the dataset used to train Generalist AI’s models has been assembled by humans completing millions of different tasks using wearable technology.

“We built our own lightweight ‘data hands’ and distributed them globally to learn how people actually interact with objects, with all the subtle force feedback, tactile feel, slips, corrections and recoveries that define human dexterity in the real world,” said Florence. “That kind of data is critical for teaching robots physical common sense, the intuitive understanding and ability to adapt in real time rather than execute rigid instructions.”

Generalist AI has released a series of videos showing the model running on robots as they repeatedly perform a range of tasks, with the most compelling, perhaps, being a robot drawing cash out of a wallet before reinserting it into the same pocket. This is a fiddly task that many humans fumble over. It's clearly not easy for the robot, either, given the flimsiness of the paper money and the fabric of the wallet — and yet it completes the task.

Another video shows a robot sorting socks by color, folding them in neat piles and counting the number of pairs using a touchscreen. Other tricky tasks the model can complete include unzipping and filling a pencil case with pens, stacking oranges in a neat pyramid and plugging in an Ethernet cable.

These videos show the breadth of Gen-1’s capabilities, but more impressive is the success rate with which it can complete certain tasks. Generalist AI measured the model’s hit rate against the previous version and found Gen-1 could successfully service a robot vacuum cleaner in 99% of cases (up from 50% for Gen-0), fold boxes in 99% of cases (up from 81% for Gen-0) and package up phones in 99% of cases (up from 62% for Gen-0).

Robots do improv

Most robots are programmed to complete a task in a specific and orderly way. But what happens when a curve ball gets thrown? “The smallest changes in the environment can cause failures,” said Florence.

An important skill robots need, which humans innately possess, is the ability to think on their feet. This is why Gen-1 has been designed with improvisation in mind so it can come up with strategies to complete tasks. Florence gives me an example of a robot using two hands to reposition an awkwardly placed part for an automotive task, even though it has only been trained to use one. 

“This kind of creativity has been largely absent from robotics until now,” he said.

Significant work still needs to be done when it comes to beefing up robots' improv chops, but early progress shows glimpses of a positive impact on both reliability and speed, says Florence. "We're beginning to see real progress and are excited to push the boundaries of embodied intelligence."

After all, there may come a day when you need a robot in your house that can fix all your other smaller robots.  


