Traditional microscopes used for automated imaging and analysis cost tens of thousands, if not hundreds of thousands, of dollars. This limits the number of microscopes a lab can afford and hence the number of parallel experiments that can be performed. We present a novel approach that combines low-cost, low-resolution microscopes with advanced computational imaging methods to extract high-resolution image information in post-processing. In addition, we implement novel machine learning methods to jointly optimize the automation task (e.g. cell segmentation) and the data acquisition process (e.g. the illumination pattern), so that less data is captured without degrading the performance of the automated task. Our initial prototype, costing ~$150, employs a Raspberry Pi as the computer and a modified Raspberry Pi V2 camera as the low-resolution microscope. A low-cost 16x16 LED array developed for display applications illuminates the sample, and 3D-printed parts are used for assembly. The LEDs in the array are illuminated sequentially to capture 256 low-resolution images, in which high-resolution information is encoded using aperture-synthesis concepts. These 256 low-resolution images were combined to achieve 0.8 µm resolution, for the first time in a low-cost setting, across a 4 mm² field-of-view. The phase of the object is also recovered in the process, making the system suitable for imaging cell cultures without any need for staining. In our latest developments, we implemented a new machine learning model that multiplexes the illumination to reduce the number of captured images to two, without any loss in performance for tasks such as cell segmentation or malaria-infection detection.
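The aperture-synthesis idea behind the 256-image capture can be illustrated with a minimal NumPy sketch. This is not our reconstruction code: the grid size, pupil radius, and LED spectrum shifts below are illustrative stand-ins. Each tilted LED illumination shifts the object spectrum, the objective pupil low-pass filters it, and the camera records intensity only; the union of the shifted passbands covers a larger spectral region than a single capture, which is the source of the resolution gain.

```python
import numpy as np

def led_passband(n, shift, radius):
    # Boolean mask of the spectral region one LED lets through the pupil:
    # a circular pupil centred at the LED-dependent spectrum shift.
    ky, kx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    return (kx - shift[0]) ** 2 + (ky - shift[1]) ** 2 <= radius ** 2

def capture(obj, shift, radius):
    # Forward model for one tilted-illumination frame: shift the spectrum,
    # low-pass with the pupil, then record intensity (phase is lost).
    F = np.fft.fftshift(np.fft.fft2(obj))
    band = led_passband(obj.shape[0], shift, radius)
    field = np.fft.ifft2(np.fft.ifftshift(F * band))
    return np.abs(field) ** 2

n, radius = 64, 8                                  # illustrative sizes
rng = np.random.default_rng(0)
obj = np.exp(1j * rng.uniform(0.0, 0.5, (n, n)))   # weak phase-only object
shifts = [(sx, sy) for sx in range(-12, 13, 6) for sy in range(-12, 13, 6)]
stack = [capture(obj, s, radius) for s in shifts]  # 25 low-resolution frames

# Spectral coverage: one LED vs. the union over all LEDs.
single = led_passband(n, (0, 0), radius)
union = np.any([led_passband(n, s, radius) for s in shifts], axis=0)
print(len(stack), single.sum(), union.sum())
```

The iterative phase-retrieval step that stitches these frames into one high-resolution complex image is omitted; the sketch only shows why sequential LED captures encode information beyond the single-frame passband.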
This also reduces the image processing time and, exploiting the increasing computing performance of open-source hardware such as the Raspberry Pi and Google’s Coral edge TPU, we are currently working towards real-time machine-learning-based automation on our portable low-cost setup. The 3D-printed design of our microscope can easily be modified to the specific requirements of a lab; for example, imaging stress-fibre reorientation in cells under mechanical stimuli requires a different setup from imaging cell confluency in a Petri dish. Our optics and algorithms remain valid across all these configurations, and the required modifications to the 3D-printed designs are usually minor. This is not possible with commercial systems, which are designed for a limited number of imaging applications. Combined with the latest developments in machine learning, our approach is a powerful tool for laboratory automation and diagnostics in low-resource settings.
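The multiplexed acquisition that cuts 256 frames down to two can be sketched as follows. This is an assumption-laden toy, not our trained model: the stack of single-LED images is random stand-in data, and the two on/off patterns are random rather than learned. It relies only on the fact that mutually incoherent LEDs switched on together produce the sum of their single-LED intensity images, so each learned pattern yields one camera frame.

```python
import numpy as np

rng = np.random.default_rng(1)
n_leds, h, w = 256, 32, 32

# Stand-in for the 256 single-LED low-resolution intensity images.
single_led = rng.random((n_leds, h, w))

# Two binary illumination patterns (random here; learned jointly with the
# downstream task, e.g. segmentation, in the real system).
patterns = (rng.random((2, n_leds)) > 0.5).astype(float)

# Incoherent LEDs add in intensity, so each multiplexed frame is a
# pattern-weighted sum over the single-LED images.
multiplexed = np.tensordot(patterns, single_led, axes=1)
print(multiplexed.shape)  # two frames replace the 256-frame scan
```

In the real pipeline, the pattern weights sit at the front of the network and are optimized end-to-end with the task loss, so the two captured frames carry exactly the information the task needs.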