If you want to sound clever and entrepreneurial, just slip the words “machine learning”, “neural networks”, or “big data” into your conversations. Most people will pretend to know what you’re talking about and sound impressed. It makes it even easier, then, to simply re-label existing quantitative work as “machine learning” or “big data” and enjoy the halo effect. I should of course add that all my research is based on deep learning techniques using big data 🙂
Given all of this I’ve been trying to find a good example where artificial intelligence (AI) has shown signs of making a difference outside of the tech sector. That’s how I stumbled across “Doctor AI”. Edward Choi and Mohammad Taha Bahadori from Georgia Tech and colleagues created this “doctor” by applying a recurrent neural network to a large electronic health record dataset. The purpose of “Doctor AI” was to predict what ailment and medication a patient would need at their next visit to the doctor.
Though all of you will know what a neural network is, I’ll humour you by describing what one is. A conventional predictive model, such as a regression, directly maps an input to an output (so if the input is 10 and the “mapping” is 200% then the output is 20). A neural network, meanwhile, introduces hidden layers between the inputs and outputs which can apply different transformations along the way to get to the right output (in a recurrent network like Doctor AI’s, the hidden state also acts as a kind of memory of past inputs). The neural network learns by comparing its predicted output to the actual output in its back-test (or training set) and then working backwards to adjust the weights of each layer so that the prediction gets closer to the right answer. Neural networks allow all sorts of non-linear and more complex mappings from input to output.
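To make that concrete, here’s a minimal sketch (not Doctor AI itself, just a toy one-hidden-layer network in NumPy) learning XOR, a mapping that no single “200%” scaling could ever capture:

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1 = rng.normal(size=(2, 8))   # input -> hidden layer weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden layer -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # forward pass: input -> hidden layer -> output
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # backward pass: compare prediction to the actual answer, then work
    # backwards to adjust the weights (not the inputs) layer by layer
    dp = p - y                         # error at the output
    dW2 = h.T @ dp
    db2 = dp.sum(axis=0)
    dh = (dp @ W2.T) * h * (1 - h)     # error pushed back to the hidden layer
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)

    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel())  # the trained network should recover XOR
```

A plain linear regression cannot learn XOR at all; the hidden layer is what buys the non-linearity.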
Back to “Doctor AI”: for inputs and outputs, the team derived codes for medical diagnoses (1,183 in total) and medications (595 in total). They used data for 263,706 patients who on average had 55 visits per person over 8 years. The records of 85% of the patients were used to train the neural network; the remaining 15% formed the test set. The results found that “Doctor AI” could correctly predict 80% of the actual ailments diagnosed at the patient’s next visit.
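The evaluation setup is simple enough to sketch. Here’s a hedged toy version of that 85/15 split (the `patients` list and the numbers are stand-ins, not the real Doctor AI data or code):

```python
import random

random.seed(42)
patients = [f"patient_{i}" for i in range(1000)]  # toy patient IDs
random.shuffle(patients)                          # shuffle before splitting

# 85% of patients train the network, the held-out 15% test it
cut = int(0.85 * len(patients))
train_set, test_set = patients[:cut], patients[cut:]

print(len(train_set), len(test_set))  # 850 150
```

Splitting by patient (rather than by visit) matters: it ensures the network is scored on people it has never seen, which is the honest test of whether it generalises.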
Of course, “Doctor AI” still misdiagnosed 20% of ailments, which I’m not sure a patient would be happy with. Moreover, “Doctor AI” needed human-inputted medical records. So we’re still far from “Doctor AI” replacing human doctors. But it could be a powerful tool for doctors to have: they could use it to see the likely ailments of upcoming patients, and the surgery could prepare accordingly. It suggests that the future lies not with a paradigm of AI OR humans, but rather AI AND humans. At least that’s what I’m telling myself. Did I tell you about the machine learning I’m doing…