Risky teleoperation, Rocket League simulation and zoologist multiplication – TechCrunch


Research in machine learning and AI, now a key technology in virtually every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron (formerly Deep Science), aims to collect some of the most relevant recent discoveries and papers – particularly in, but not limited to, artificial intelligence – and explain why they matter.

This week in AI, researchers discovered a method that could allow adversaries to track the movements of remote-controlled robots even when the robots’ communications are end-to-end encrypted. The co-authors, from the University of Strathclyde in Glasgow, said their study shows that adopting cybersecurity best practices is not enough to stop attacks on autonomous systems.

Remote control, or teleoperation, promises to allow operators to guide one or more robots remotely in various environments. Startups such as Pollen Robotics, Beam and Tortoise have demonstrated the usefulness of remotely operated robots in grocery stores, hospitals and offices. Other companies are developing remote-controlled robots for tasks such as demining or monitoring heavily irradiated sites.

But the new research shows that teleoperation, even when supposedly “secure,” is risky in its susceptibility to surveillance. The Strathclyde co-authors describe in a paper the use of a neural network to infer information about the operations a remote-controlled robot is performing. After collecting samples of the TLS-protected traffic between the robot and the controller and analyzing them, they found that the neural network could identify movements about 60% of the time and also reconstruct “warehousing workflows” (e.g., picking up packages) with high accuracy.
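The paper's exact pipeline isn't excerpted here, so the following is only a hedged illustration of the general technique: TLS encrypts payloads but not packet sizes and timing, and a classifier can map those patterns to robot actions. Everything in this sketch (the `traffic_sample` generator, the three movement classes, the network shape) is invented for illustration, not taken from the Strathclyde work:

```python
# Sketch of a traffic-analysis side channel on encrypted teleoperation.
# TLS hides payloads, but packet sizes still leak what the robot does.
# All data here is synthetic and only illustrates the attack's shape.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def traffic_sample(movement: int, n_packets: int = 50) -> np.ndarray:
    """Fake a capture of encrypted-packet sizes for one movement type.
    Hypothetical: each movement yields a distinct mean packet size."""
    base = 200.0 + 80.0 * movement  # invented per-movement mean size
    return base + rng.normal(0.0, 15.0, n_packets)

# Three movement classes, 100 captures each.
X = np.stack([traffic_sample(m) for m in range(3) for _ in range(100)])
y = np.repeat(np.arange(3), 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

On cleanly separated synthetic features a classifier like this is near-perfect; the roughly 60% figure the researchers report reflects real, far noisier traffic.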

Picture credits: Shah et al.

A new study from researchers at Google and the University of Michigan is alarming, if less immediately threatening. The work surveyed “financially stressed” Indian users of instant loan platforms that target borrowers whose creditworthiness is determined by risk-modeling AI. According to the co-authors, users felt indebted for the “benefit” of instant loans and obligated to accept harsh terms, over-share sensitive data and pay high fees.

The researchers say the findings illustrate the need for greater “algorithmic accountability,” especially when it comes to AI in financial services. “We argue that accountability is shaped by platform-user power relations, and urge caution to policymakers against taking a purely technical approach to fostering algorithmic accountability,” they wrote. “Instead, we call for situated interventions that enhance user agency, enable meaningful transparency, reconfigure designer-user relationships, and spark critical reflection in practitioners toward greater accountability.”

In less sobering research, a team of scientists from TU Dortmund University, Rhine-Waal University and LIACS Universiteit Leiden in the Netherlands has developed an algorithm it claims can “solve” the game Rocket League. Motivated to find a less computationally intensive way to create game-playing AI, the team leveraged what it calls a “sim-to-sim” transfer technique, which trained the AI system to perform in-game tasks like goalkeeping and striking in a lean, streamlined version of Rocket League. (Rocket League basically looks like indoor soccer, except with cars in place of human players, in teams of three.)

Rocket League AI

Picture credits: Pleines et al.

It wasn’t perfect, but the researchers’ Rocket League system was able to save nearly every shot fired at it when playing goalkeeper. On offense, the system managed to score 75% of its shots – a respectable record.

Human motion simulators are also progressing at a steady pace. Meta’s work on tracking and simulating human limbs has obvious applications in its AR and VR products, but it could also be used more widely in robotics and embodied AI. Research published this week was touted by none other than Mark Zuckerberg.

Skeleton and muscle groups simulated in MyoSuite.


MyoSuite simulates muscles and skeletons in 3D as they interact with objects and with themselves. This is important for agents learning to hold and handle objects correctly without crushing or dropping them, and in a virtual world it provides realistic grips and interactions. It supposedly runs thousands of times faster on certain tasks, letting simulated learning processes happen much more quickly. “We’re going to open source these models so researchers can use them to advance the field,” Zuckerberg said. And they did!

Many of these simulations are agent- or object-based, but this MIT project aims to simulate a global system of independent agents: self-driving cars. The idea is that if you have a good number of cars on the road, you can make them work together not only to avoid collisions, but also to avoid idling and unnecessary stops at lights.

Animation of cars slowing down at a 4-way intersection with a red light.

If you look closely, only the cars in front really stop.

As you can see in the animation above, a set of autonomous vehicles communicating via V2V protocols can essentially keep all but the first few cars from ever stopping, by gradually slowing down one behind the other, but never to the point of a full stop. This kind of hypermiling behavior might not seem like it saves much gas or battery, but when you scale it to thousands or millions of cars, it makes a difference – and it could also make for a more comfortable ride. Good luck getting everyone to approach an intersection perfectly spaced like that, though.
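MIT's actual model isn't spelled out in this summary, so here is only a toy, hedged sketch of the behavior described: the lead car stops for the red light, while each follower eases toward the speed reported (as if over V2V) by the car ahead, with a floor, so trailing cars slow down but never fully stop. All numbers (speeds, braking rate, light timing) are invented, and headways and collision checks are omitted:

```python
# Toy 1D model of V2V-smoothed braking (not MIT's model). The lead car
# brakes for a red light; each follower tracks the car ahead's speed
# with a minimum crawl speed, so only the lead car actually stops.

def leader_speed(t: float) -> float:
    """Prescribed lead-car target speed: red light from t=5s to t=15s."""
    return 0.0 if 5.0 <= t < 15.0 else 10.0

def simulate(n_cars: int = 4, dt: float = 0.1, horizon: float = 30.0,
             floor: float = 2.0, accel: float = 2.0):
    """Return each car's speed history under bounded accel/braking."""
    v = [10.0] * n_cars
    history = [[] for _ in range(n_cars)]
    t = 0.0
    while t < horizon:
        # Followers target the speed of the car ahead, never below `floor`.
        targets = [leader_speed(t)] + [max(v[i - 1], floor)
                                       for i in range(1, n_cars)]
        for i in range(n_cars):
            dv = targets[i] - v[i]
            dv = max(-accel * dt, min(accel * dt, dv))  # bounded accel
            v[i] += dv
            history[i].append(v[i])
        t += dt
    return history

history = simulate()
min_speeds = [min(h) for h in history]  # lead car hits 0; followers don't
```

Plotting `history` reproduces the qualitative picture in the animation: a braking wave propagates backward, shallower with each car, and every car is back at cruise speed after the light turns green.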

Switzerland is taking a good, long look at itself using 3D scanning technology. The country is making a huge map with lidar-equipped drones and other tools, but there’s a catch: the drones’ movement (deliberate and accidental) introduces error into the point map that must be corrected by hand. That’s not a problem if you’re only scanning one building, but an entire country?

Fortunately, an EPFL team is embedding an ML model directly into the lidar capture stack; it can determine when an object has been scanned multiple times from different angles and use that information to align the point map into a single, consistent mesh. The news article isn’t particularly illuminating, but the accompanying paper goes into more detail. An example of the resulting map can be seen in the video above.
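The EPFL model itself works inside the capture pipeline and isn't detailed here, but the classical building block of this kind of registration, rigidly aligning two scans of the same object, can be sketched with the Kabsch algorithm. This version assumes known point correspondences, which real lidar registration does not have (that is exactly the hard part an ML model helps with):

```python
# Kabsch algorithm: least-squares rigid alignment of paired 3D points.
# A hedged illustration of point-cloud registration in general, not the
# EPFL model, which learns to align without known correspondences.
import numpy as np

def kabsch(P: np.ndarray, Q: np.ndarray):
    """Find rotation R and translation t minimizing ||R @ P + t - Q||.
    P and Q are 3xN arrays of corresponding points."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflections
    t = cq - R @ cp
    return R, t

# Align a scan against a rotated-and-shifted copy of itself.
rng = np.random.default_rng(1)
scan = rng.normal(size=(3, 200))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
moved = Rz @ scan + np.array([[1.0], [2.0], [3.0]])
R, t = kabsch(scan, moved)
residual = np.linalg.norm(R @ scan + t - moved)  # ~0 for exact pairs
```

With exact correspondences the recovered transform is essentially perfect; drone scans add noise, partial overlap, and unknown pairings on top of this.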

Finally, in some unexpected but very pleasant AI news, a team at the University of Zurich has designed an algorithm to track animal behavior so zoologists don’t have to sift through weeks of footage to find the two examples of a courtship dance. It’s a collaboration with the Zurich Zoo, which makes sense given the following: “Our method can recognize even subtle or rare behavioral changes in research animals, such as signs of stress, anxiety or discomfort,” said lab director Mehmet Fatih Yanik.

The tool could thus be used both for learning about and tracking behaviors in captivity, for the welfare of captive animals in zoos, and for other forms of animal study. Researchers could use fewer subject animals and get more information in less time, with less work from graduate students poring over video files late at night. Sounds like a win-win-win situation to me.

Illustration of monkeys in a tree being analyzed by an AI.

Picture credits: Ella Marushenko / ETH Zurich

Also, I love the illustration.
