Deep learning ... we too are incorporating it into the CATCH project
Photo credits @aunalytics
With last year being declared the “Deep Learning Year”, it is fair to assume that everyone has either heard or read about deep learning or deep neural networks. It has been a hot topic these past few years, often highlighted in the press, be it in print or online. Beyond the media, celebrities have also been curious about what they could do with deep learning in their own projects. One of them was Kristen Stewart, who co-authored a paper on style transfer. Another example that received wide media coverage is game playing, specifically the game of Go, a board game invented in China. Google DeepMind’s AlphaGo program defeated the Go master Lee Sedol in March 2016 and the then-No. 1 ranked player Ke Jie in May 2017, 4:1 and 3:0 respectively. Recently, HBO’s TV series Silicon Valley used deep learning algorithms to develop a mobile app that identifies hotdogs (at its core, a simple image classification problem). One of the people who worked on the app shared on Medium what they used to build it. The application was made only for the series, but it is now available to the public on Google Play and the App Store. Here is a fragment from the show explaining how it works (credit @HBO):
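To see why “hotdog or not hotdog” is a simple image classification problem, it helps to strip it down to its core: score some features of the image, squash the score into a probability, and threshold it. The real app used a deep convolutional network; the sketch below is only a toy logistic-classifier illustration, and the weights and features in it are made up for the example.

```python
import math

def sigmoid(z):
    """Squash a real-valued score into a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def predict(features, weights, bias):
    """Binary classifier: weight the image features, add a bias,
    and convert the score to a probability of the positive class."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return sigmoid(score)

# Toy, hand-picked weights and features purely for illustration;
# a real classifier would learn the weights from labeled images.
weights, bias = [1.5, -2.0, 0.7], -0.3
features = [0.9, 0.1, 0.4]   # e.g. simple color/shape descriptors

p = predict(features, weights, bias)
label = "hotdog" if p > 0.5 else "not hotdog"
```

A deep network replaces the hand-picked features and weights with many learned layers, but the final decision is still this kind of thresholded score.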
Other applications of deep learning include machine translation (the famous Google Translate we all use when we travel), speech recognition and speech synthesis, automatic text generation, art generation, etc. To learn more about the variety of applications of deep learning, one could simply do a quick Internet search. However, Jürgen Schmidhuber’s article, Deep Learning in Neural Networks: An Overview, is the right place to start: it is a historical survey summarizing relevant work starting as early as the 1940s. For those who really want to get their hands dirty with neural networks, there are online classes on practically all MOOC platforms, MIT has an online book and open lectures, and the online community, which mostly consists of researchers, shares tutorials and answers questions on the websites dedicated to these kinds of problems (StackOverflow, GitHub, StackExchange, Medium, etc.).
As in any research field, deep learning has its branches. There are challenges and competitions in probably all of them, some organized by researchers (mainly people working in academia and academic research groups), and some organized by companies such as Kaggle. They provide the datasets, so competitors do not need to worry about the data; they only need to worry about how to advance the field. While reading a Reddit thread about deep learning, I came across this comment, which I think is very much on point:
Research is mostly gradient based, through many many iterations by many many people. Go name the weights … (@pilooch)
Well, my research is focused on the use of deep learning in healthcare applications. Currently I am working with the deep learning team at the Innovative and Emerging Technologies lab at the University of Louisville, Kentucky, USA, on colonoscopy image segmentation and classification, as a result of my visit to UofL during the summer. We are training a neural network model to detect colon polyps and comparing the results with other state-of-the-art methods. In this study we consider four polyp datasets: three that are public and one that is private. The private dataset comes from a collaboration between the University of Deusto and a hospital in Bilbao, Spain.
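Comparing segmentation models against state-of-the-art methods usually comes down to an overlap metric between the predicted mask and the ground-truth mask; the Dice coefficient is a standard choice in medical image segmentation. A minimal sketch in plain Python (the flattened 3×3 masks below are illustrative toy data, not from our datasets):

```python
def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks given as flat
    lists of 0/1 values: 2*|A intersect B| / (|A| + |B|)."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Toy example: flattened 3x3 masks for a "polyp" region.
pred  = [0, 1, 1, 0, 1, 1, 0, 0, 0]   # model's prediction
truth = [0, 1, 1, 0, 1, 0, 0, 0, 0]   # annotated ground truth
print(dice_coefficient(pred, truth))  # 6/7, about 0.857
```

A score of 1.0 means the predicted polyp region matches the annotation exactly; 0.0 means no overlap at all, which makes the metric easy to compare across methods and datasets.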
In the long run, I will be applying deep learning methods to cancer patients’ care pathways, allowing each patient to receive an optimized and personalized care pathway. Every person is unique, and tailored, personalized care might help these patients overcome difficulties during diagnosis, treatment, and the follow-up period.