Latest in Robotics news thread

FAA trials show willingness to let drones fly out of sight of operators

The US Federal Aviation Administration has taken a step back from its earlier opposition to letting drones fly when pilots can't see them.
Drones, also called unmanned aerial vehicles (UAVs) or unmanned aircraft systems (UASs), are a hot item in technology circles. Entrepreneurs and big businesses want to use drones for chores like taking real estate photos and shooting movies.
But some other uses, like checking miles of oil pipeline or delivering packages, require drones to fly beyond operators' sight. That wouldn't be allowed under the draft drone regulations the FAA proposed in February.
On Wednesday, though, the FAA announced industry partnerships that signal the agency could be willing to let drone operators stretch their wings more. The projects will evaluate drones operated beyond the pilot's line of sight, with drone maker PrecisionHawk testing the aircraft for crop monitoring and BNSF Railway exploring what's necessary to control drones used to inspect railroads.
 
Amazon working on drones that will deliver items to wherever you are
By Nick Lavars
May 10, 2015


Drone deliveries hey? What could be more convenient than having the milk for your cereal arrive fresh each morning, or that forgotten dinner ingredient plonked down on the doorstep just as you fire up the stove? Well, details now revealed in an Amazon patent application suggest that if its Prime Air drones do materialize, they mightn't just be limited to making house calls. The application outlines plans for drones that track a customer's GPS position, flagging the possibility of having items brought to you even when you're out and about.
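
For a sense of what the patent's "bring it to me" option would involve in practice, here is a minimal Python sketch of the re-targeting logic such a system might need. The class, function names and the 50 m reroute threshold are illustrative assumptions, not details from Amazon's filing.

# Illustrative sketch only: the patent describes delivery to a customer's
# current GPS position, but these names and thresholds are hypothetical.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class GpsFix:
    lat: float
    lon: float

def haversine_m(a: GpsFix, b: GpsFix) -> float:
    """Great-circle distance between two fixes, in metres."""
    r = 6_371_000  # mean Earth radius
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(h))

def update_destination(current_target: GpsFix, latest_fix: GpsFix,
                       reroute_threshold_m: float = 50.0) -> GpsFix:
    """Re-target the drone only if the customer has moved far enough to matter."""
    if haversine_m(current_target, latest_fix) > reroute_threshold_m:
        return latest_fix
    return current_target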
 
Thought-controlled robotic arm reacts faster and more smoothly than its predecessors
By Ben Coxworth
May 22, 2015


Although we've definitely seen a number of thought-controlled prosthetic arms before, most of those have been activated by implants in the user's motor cortex, which is the brain's movement-control center. The arms' resulting movements have been somewhat jerky, and there has typically been a delay between the user thinking about moving the arm and the movement actually taking place. Now, however, a team of researchers has announced the results of an experiment in which those limitations were greatly reduced.
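
For context, the core of any such system is a decoder that turns recorded firing rates into movement commands. The Python sketch below shows the general shape of that step; the weights and smoothing factor are placeholders, not the research team's actual decoder.

# Minimal sketch of the kind of decoding step such prostheses rely on:
# mapping recorded firing rates to an arm velocity command.
import numpy as np

def decode_velocity(firing_rates: np.ndarray, weights: np.ndarray,
                    prev_velocity: np.ndarray, smoothing: float = 0.8) -> np.ndarray:
    """Linear read-out of 3-D hand velocity, low-pass filtered.

    firing_rates: shape (n_channels,) spike counts in the latest time bin
    weights:      shape (3, n_channels), learned during calibration
    smoothing:    0..1, higher = smoother but laggier movement
    """
    raw = weights @ firing_rates            # instantaneous velocity estimate
    return smoothing * prev_velocity + (1 - smoothing) * raw

The smoothing term in the last line is the usual trade-off: heavier filtering makes movement less jerky but adds lag, which is roughly the pair of limitations the new work set out to reduce.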
 
MIT’s Humanoid Robot Goes to Robo Boot Camp


As one of the Darpa Robotics Challenge’s 25 robot finalists, Atlas will be representing Tedrake’s team at the 2015 challenge in Pomona, California in two weeks. Its purpose in life—along with the other finalists—is to be the best search-and-rescue robot possible. In terrain too dangerous for humans to traverse, a robot that can lift hundreds of pounds and work power tools could save lives without endangering others. The challenge will put those skills to the test.

MIT’s Atlas won’t be the only one with the weight of the world on its shoulders come June. Tedrake’s group is competing against five other Atlases, each running different software and with a few physical modifications to the same body type. Google-owned robotics company Boston Dynamics made Atlas—except for its hands, which come from Robotiq—and donated it to MIT for the competition. In order to win $2 million, MIT’s robot will have one hour to open a door, turn a valve, cut a hole in a wall using a power drill, walk up some stairs, traverse rocky, unstable ground, and handle a surprise task. Oh, and it has to drive a car.
 
What’s in This Picture? AI Becomes as Smart as a Toddler


Artificial intelligence has graduated past the infancy stage of figuring out what's in an image. Computers have previously been capable of little more than a simple game of I Spy: Name a specific object or person, and they'll show you an image containing it. But thanks to new developments in AI research, machines can now answer more complex questions, like, “What is there on the grass, except the person?” (For the answer to that awkwardly worded enigma, take a look at the last image.)
A research paper published on Thursday on arXiv, Cornell University's preprint server, outlines a system that learns to identify fine-grained visual features of images and the words associated with them. It combines the two into a dictionary in its digital brain, then references that dictionary to answer new questions about never-before-seen images.
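
Roughly speaking, the pipeline pairs visual features with words and then looks answers up in that learned pairing. A toy Python version, with random vectors standing in for real learned features (nothing here is the authors' code), might look like this:

# Conceptual sketch only: a learned "dictionary" of word -> visual feature
# vectors, queried by nearest match for a region of a new image.
import numpy as np

rng = np.random.default_rng(0)

# Pretend dictionary built during training.
visual_dictionary = {word: rng.normal(size=128) for word in
                     ["person", "dog", "frisbee", "grass", "tree"]}

def describe_region(region_feature: np.ndarray,
                    exclude: frozenset = frozenset()) -> str:
    """Answer 'what is there?' for one image region by nearest dictionary entry."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    candidates = {w: v for w, v in visual_dictionary.items() if w not in exclude}
    return max(candidates, key=lambda w: cosine(region_feature, candidates[w]))

# "What is there on the grass, except the person?"
answer = describe_region(rng.normal(size=128), exclude=frozenset({"person", "grass"}))

A real system would use convolutional-network features and learned word embeddings rather than random vectors, but the lookup-by-similarity idea is the same.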
 
MIT's robotic cheetah can now leap over obstacles
By Nick Lavars
May 28, 2015

The last time we heard from the researchers working on MIT's robotic cheetah project, they had untethered their machine to let it bound freely across the campus lawns. Wireless and with a new spring in its step, the robot hit speeds of 10 mph (16 km/h) and could jump 13 in (33 cm) into the air. The quadrupedal robot has now been given another upgrade in the form of a LIDAR system and special algorithms, allowing it to detect and leap over obstacles in its path.
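
As a rough illustration of the detect-and-leap decision (not MIT's actual controller; the stride period, margin and impulse scaling below are assumptions), the logic boils down to estimating how many strides remain before take-off and how hard to push off:

# Rough Python sketch of a detect-and-leap decision fed by LIDAR range data.
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float   # along the running direction
    height_m: float

def plan_jump(obstacle: Obstacle, speed_mps: float,
              takeoff_margin_m: float = 0.3) -> dict:
    """Decide how many strides remain before take-off and how hard to push off."""
    time_to_obstacle = obstacle.distance_m / max(speed_mps, 0.1)
    stride_period_s = 0.35                      # assumed gait period
    strides_left = int(time_to_obstacle / stride_period_s)
    # Higher obstacles need a higher apex, hence a stronger vertical impulse.
    vertical_impulse = 1.0 + 2.5 * obstacle.height_m
    jump_now = obstacle.distance_m < speed_mps * stride_period_s + takeoff_margin_m
    return {"strides_left": strides_left,
            "vertical_impulse": vertical_impulse,
            "jump_now": jump_now}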
 
‘Thought vectors’ could revolutionize artificial intelligence

Despite all the recent hullabaloo concerning artificial intelligence, in part fueled by dire predictions made by the likes of Stephen Hawking and Elon Musk, there have been few breakthroughs in the field to warrant such fanfare. The artificial neural networks that have caused so much controversy are a product of the 1950s and '60s, and have remained relatively unchanged since then. The strides forward made in areas like speech recognition owe as much to improved datasets (think big data) and faster hardware as to actual changes in AI methodology. The thornier problems, like teaching computers to do natural language processing and make leaps of logic, remain nearly as intractable now as they were a decade ago.

This may all be about to change. Last week, the British high priest of artificial intelligence, Professor Geoffrey Hinton, who was snapped up by Google two years back during its massive acquisition of AI experts, revealed that his employer may have found a means of breaking the AI deadlock that has persisted in areas like natural language processing.
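
The "thought vector" idea is to represent an entire sentence as a single dense vector, the way word vectors already represent single words. As a crude illustration only (real thought vectors are produced by trained encoder networks, not by averaging), a sentence-level vector can be built in Python like this:

# Crude illustration of a sentence-level vector: one dense point per sentence.
# The random 300-d word vectors stand in for properly trained embeddings.
import numpy as np

rng = np.random.default_rng(1)
word_vectors = {w: rng.normal(size=300) for w in "the cat sat on the mat".split()}

def sentence_vector(sentence: str) -> np.ndarray:
    vecs = [word_vectors[w] for w in sentence.split() if w in word_vectors]
    return np.mean(vecs, axis=0)

v = sentence_vector("the cat sat on the mat")  # shape (300,)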
 
http://sciencefriday...telligence.html


This week scientists unveiled a robot that can sustain injury to one of its six legs, think for a few minutes, and devise a more efficient way to walk—by essentially “limping” away as fast as possible. Jeff Clune and his colleagues accomplished the feat by endowing the robot with what they call a "simulated childhood" of possible motions, and letting the robot figure out the rest.
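
The adaptation loop is easy to caricature: build a large library of gaits in simulation, then after damage try the most promising ones on the real robot and keep the best. The published approach reportedly uses a more sophisticated Bayesian-optimization update, but a greedy Python sketch captures the idea; everything below is illustrative, not the authors' code.

# Simplified sketch of the "simulated childhood" idea: a library of candidate
# gaits with simulated scores, searched by real-world trial and error.
import random

# Pretend library built in simulation: gait id -> predicted speed (m/s).
gait_library = {f"gait_{i}": random.uniform(0.1, 0.5) for i in range(100)}

def real_speed(gait_id: str) -> float:
    """Stand-in for running the gait on the damaged robot and timing it."""
    return gait_library[gait_id] * random.uniform(0.2, 1.0)

def adapt(trials: int = 10):
    best_gait, best_speed = None, 0.0
    untried = dict(gait_library)
    for _ in range(trials):
        # Try the gait the simulation is most optimistic about next.
        candidate = max(untried, key=untried.get)
        measured = real_speed(candidate)
        untried.pop(candidate)
        if measured > best_speed:
            best_gait, best_speed = candidate, measured
    return best_gait, best_speed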


https://www.youtube....h?v=T-c17RKh3uE


How far can this sort of robotic thought go? According to computer scientist Ashok Goel, if robots have unlimited capability to learn, "why would there be a limit to emotional intelligence?" After all, he says, humans aren’t born with a full set of emotional and ethical intelligence—children learn it by observing adults.
 
Highlights from the ICRA 2015 robotics conference in Seattle
By David Szondy
May 30, 2015

The city of Seattle saw a robotic population explosion this week as the 2015 IEEE International Conference on Robotics and Automation (ICRA) descended on the Washington State Convention Center. The IEEE Robotics and Automation Society’s flagship conference ran the gamut of all things robotic, from showcases of new technology to forums on government policies as they relate to robotics. Here's our look at the highlights.
 
Robots may soon replace sweatshop workers

Human hands are extremely good at making clothes. While many manufacturing processes have been automated, stitching together garments remains a job for millions of people around the world. As with most labour-intensive tasks, much of the work has migrated to low-wage countries, especially in Asia. Factory conditions can be gruelling. As nations develop and wages rise, the trade moves on to the next cheapest location: from China, to Bangladesh and, now that it is opening up, Myanmar. Could that migration be about to end with the development of a robotic sewing machine?
There have been many attempts to automate sewing. Some processes can now be carried out autonomously: the cutting of fabric, for instance, and sometimes sewing buttons or pockets. But it is devilishly difficult to make a machine in which fabric goes in one end and finished garments, such as jeans and T-shirts, come out the other. The particularly tricky bit is stitching two pieces of material together. This involves aligning the material correctly to the sewing head, feeding it through and constantly adjusting the fabric to prevent it slipping and buckling, while all the time keeping the stitches neat and the thread at the right tension. Nimble fingers invariably prove better at this than cogs, wheels and servo motors.
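
To see why this is hard, consider the kind of feedback loop an automated machine would need just to keep a seam straight and the thread taut. The Python sketch below is purely illustrative, with made-up gains and sensor readings rather than any real machine's design.

# Purely illustrative proportional control loop for the feed-alignment problem
# the article describes; sensor values and gains are assumptions.
def feed_correction(measured_offset_mm: float, measured_tension_n: float,
                    target_tension_n: float = 1.5,
                    k_align: float = 0.4, k_tension: float = 0.2) -> dict:
    """Corrections keeping the seam straight and the thread at the right tension."""
    return {
        "feed_adjust_mm": -k_align * measured_offset_mm,   # steer fabric back on line
        "tension_adjust": k_tension * (target_tension_n - measured_tension_n),
    }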
 
Robot broccoli harvester could cut cost of eating your greens
Three-dimensional camera technology from the University of Lincoln is helping in the development of a fully automated robotic system that can harvest broccoli.

The project, which is jointly funded by BBSRC and Innovate UK, will test whether 3D camera technology can be used to identify and select when broccoli is ready for harvesting. This will be a key step towards the development of a fully automatic robotic harvesting system for broccoli, which will significantly reduce production costs.
The University of Lincoln was one of more than 70 UK businesses and universities to share funding through the £70m Agri-Tech Catalyst, which aims to improve the development of agricultural technology in the UK.
Project leader Prof Tom Duckett, group co-ordinator of the Agri-food Technology Research Group at the University of Lincoln, said: “Broccoli is one of the world’s largest vegetable crops and is almost entirely manually harvested, which is costly.” This technology is seen as being an important move towards developing fully automatic robot harvesting systems, which could then be used for a variety of different crops.
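
As a purely illustrative example of how a 3D camera feed could gate harvesting (not the Lincoln project's actual pipeline; the 8 cm cut-off is an assumed commercial head size), a readiness check over a segmented point cloud might look like this in Python:

# Illustrative only: decide whether a broccoli head is big enough to cut,
# given its segmented 3-D point cloud as an (N, 3) array in metres.
import numpy as np

def head_ready(head_points: np.ndarray, min_diameter_m: float = 0.08) -> bool:
    """Estimate head diameter from its footprint in the x-y plane."""
    xy = head_points[:, :2]
    extent = xy.max(axis=0) - xy.min(axis=0)   # bounding-box width and depth
    diameter = float(extent.mean())
    return diameter >= min_diameter_m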
 
