Deep Learning and Robots

Jun 17


The US Centers for Disease Control and Prevention estimates that 29.1 million people in the US have diabetes, and the World Health Organization estimates that 347 million people have the disease worldwide. Diabetic Retinopathy (DR) is an eye disease associated with long-standing diabetes; around 40% to 45% of Americans with diabetes have some stage of it. Progression to vision impairment can be slowed or averted if DR is detected in time. Detection is difficult, however, because the disease often shows few symptoms until it is too late to provide effective treatment.


Currently, detecting DR is a time-consuming, manual process that requires a trained clinician to examine and evaluate digital color fundus photographs of the retina. Human readers often submit their reviews a day or two later, and the delayed results lead to lost follow-up, miscommunication, and delayed treatment.


Clinicians can identify DR by the presence of lesions associated with the vascular abnormalities caused by the disease. While this approach is effective, its resource demands are high. The expertise and equipment required are often lacking in areas where the rate of diabetes in local populations is high and DR detection is most needed. As the number of individuals with diabetes continues to grow, the infrastructure needed to prevent blindness due to DR will become even more insufficient.


The need for a comprehensive and automated method of DR screening has long been recognized, and previous efforts have made good progress using image classification, pattern recognition, and machine learning. With color fundus photography as input, the goal of this competition is to push an automated detection system to the limit of what is possible – ideally resulting in models with realistic clinical potential. The winning models will be open sourced to maximize the impact such a model can have on improving DR detection.


An Archer developer is competing in the Diabetic Retinopathy Detection competition, hoping to make a contribution to science.


As you may know, deep learning (sometimes called deep machine learning, deep structured learning, hierarchical learning, or simply DL) is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data using model architectures composed of multiple non-linear transformations.
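The phrase "multiple non-linear transformations" can be made concrete with a minimal sketch: a forward pass through a small stack of layers, each applying an affine map followed by a non-linearity. Everything here (layer sizes, random weights, the ReLU activation) is illustrative, not any particular production model.

```python
import numpy as np

def relu(x):
    # The non-linearity applied after each affine transformation
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass input x through a stack of (weight, bias) layers.

    Each layer computes relu(x @ w + b); stacking several such
    non-linear transformations is what makes the model "deep" and
    lets it represent increasingly abstract features of the data.
    """
    for w, b in layers:
        x = relu(x @ w + b)
    return x

rng = np.random.default_rng(0)
# Three illustrative layers mapping 4 -> 8 -> 8 -> 2 features
layers = [(rng.standard_normal((4, 8)), np.zeros(8)),
          (rng.standard_normal((8, 8)), np.zeros(8)),
          (rng.standard_normal((8, 2)), np.zeros(2))]

out = forward(rng.standard_normal((1, 4)), layers)
print(out.shape)  # (1, 2)
```

In a real network the weights would be learned from data rather than drawn at random, but the shape of the computation is the same.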


As the head of AI research at Facebook, Yann LeCun oversees the creation of vast "neural networks" that can recognize photos and respond to everyday human language. Similar work is driving speech recognition on Google's Android phones, instant language translation on Microsoft's Skype service, and many other online tools that "learn" over time. Using vast networks of computer processors, these systems approximate the networks of neurons inside the human brain, and in some ways they can outperform humans themselves.


Deep learning will also extend beyond the internet, into devices that can operate in the physical world, such as robots and self-driving cars. Recently, researchers at the University of California at Berkeley presented a robotic system that uses deep learning techniques to teach itself how to screw a cap onto a bottle. Early this year, big-name chip maker Nvidia and an Israeli company called Mobileye revealed that they were developing deep learning systems that can help power self-driving cars.


LeCun has been exploring similar types of “robotic perception” for over a decade, publishing his first paper on the subject in 2003. The idea was to use deep learning algorithms as a way for robots to identify and avoid obstacles as they moved through the world—something not unlike what’s needed with self-driving cars. “It’s now a very hot topic,” he says.


Google and many other big players have already demonstrated self-driving cars. But according to researchers, including LeCun, deep learning can advance other technologies as well, just as it has vastly improved image recognition and speech recognition. Deep learning algorithms date back to the 1980s, but now that they can tap the enormously powerful networks of machines available to today's companies and research centers, they provide a viable way for systems to teach themselves tasks by analyzing enormous amounts of data.


Deep learning is particularly interesting because it has transformed so many different areas of research. Nevertheless, deep learning and object recognition alone are not enough to make a smart robot. The algorithms go beyond conventional techniques by learning to identify objects through comparison with objects they have encountered before; the system then places all of the recognized objects on a three-dimensional map.
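The "identify by comparison, then place on a map" idea can be sketched very simply: compare a new feature vector against previously learned ones and record where the best match was seen. The feature vectors, labels, and 3-D position below are all hypothetical stand-ins; in practice the features would come from a deep network's embedding layer.

```python
import numpy as np

def cosine_sim(a, b):
    # Similarity between two feature vectors, in [-1, 1]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical feature vectors the system has "learned" for known objects
known = {
    "bottle": np.array([0.9, 0.1, 0.0]),
    "cap":    np.array([0.1, 0.9, 0.2]),
}

def identify(features, memory):
    # Compare the new observation against everything learned so far
    # and return the label of the closest match
    return max(memory, key=lambda label: cosine_sim(features, memory[label]))

# A new observation, plus the 3-D position where it was seen
observation = np.array([0.85, 0.15, 0.05])
position = (1.2, 0.4, 0.0)

label = identify(observation, known)
scene_map = {label: position}  # place the recognized object on a 3-D map
print(label, scene_map[label])  # bottle (1.2, 0.4, 0.0)
```

A real robotic system would use far richer embeddings and a proper spatial representation, but the loop is the same: match what you see against what you have learned, then anchor it in space.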


In the past, researchers used largely separate techniques for speech recognition, image recognition, translation, and robotics. But now one set of techniques, though a rather broad set, can be applied to all of these fields.


As a result, each of these fields is suddenly evolving at a much faster rate. Facial recognition has become mainstream, and systems can quickly learn to recognize an object using an ordinary camera. Then, as the object moves, deep learning algorithms learn more about it in different environments, so recognition gets even better.

Speech recognition has also advanced, and the sorts of autonomous machines that Yann LeCun's team is working on could reach the commercial market within the next five years.

