I’ve set myself the task, over the next few weeks, of refining an image processing system in Matlab which will become our core engine for handling scan data (MRI, CT, etc.) for prognostics and diagnostics.
My guess is that a hybrid feature extraction / deep learning algorithm, with an outer evolutionary algorithm acting as the training optimiser, is going to be the solution; I hope to report back on this before the end of August.
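To make the "outer evolutionary optimiser" idea concrete, here is a minimal sketch of the general pattern: a simple genetic algorithm searching over a network's hyperparameters, with the inner training loop replaced by a toy surrogate loss. This is purely illustrative Python, not the actual Matlab system; the `validation_loss` function, its optimum, and all parameter ranges are invented stand-ins.

```python
import random

def validation_loss(hyperparams):
    """Hypothetical stand-in for training the hybrid model and
    returning its validation error. In the real system this would
    train the feature-extraction/deep-learning pipeline in Matlab.
    The toy surrogate here is minimised at lr=0.01, n_hidden=64."""
    lr, n_hidden = hyperparams
    return (lr - 0.01) ** 2 * 1e4 + ((n_hidden - 64) / 64) ** 2

def mutate(h):
    # Perturb the learning rate and hidden-layer width slightly.
    lr, n_hidden = h
    return (max(1e-4, lr + random.gauss(0, 0.005)),
            max(4, n_hidden + random.randint(-8, 8)))

def evolve(pop_size=20, generations=30, seed=0):
    random.seed(seed)
    # Random initial population of (learning rate, hidden units) pairs.
    pop = [(random.uniform(1e-4, 0.1), random.randint(4, 256))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=validation_loss)
        parents = pop[:pop_size // 2]  # truncation selection: keep the best half
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    return min(pop, key=validation_loss)

best = evolve()
```

The expensive part in practice is that every fitness evaluation means training a network, which is exactly why the GPU cluster matters: population members can be evaluated in parallel.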
To that end, I’ve been putting together a workstation cluster which combines GPUs to support deep learning with Intel Xeon Phi co-processors to offload maths from the CPUs running Matlab.
I’ve been all around the houses with operating systems, including CentOS and the lovely Ubuntu, and surprisingly ended up running Windows 7: the hardware and driver combinations are just too flaky under Linux. Plus, I’ve been a Mac user too long to type essays into a terminal in order to get something to work. So here we are: the servers are set up in a deserted lab (the cooling fans are like jet engines) and I’m accessing them remotely via Windows shared desktops.
I’ve been so immersed in other aspects of AI, particularly optimisation and search, for the last few years that ‘Deep Learning’ had to a certain degree passed me by. However, it seems to be mostly the computational hardware that has made the step change. Tuning multiple hidden layers in a neural network looks very familiar, even after the intervening years. The results are stunning and I’m looking forward to reporting back.
Never has ‘No Free Lunch’ been more apposite…