The ability to phenotype cells is fundamentally important in biological research and medicine. Current methods of phenotyping rely primarily on flow cytometry to detect specific fluorescent markers. There are many situations where this approach is undesirable, owing to problems with the availability, specificity, cross-reactivity, and cost of phenotyping markers. Furthermore, a large panel of markers can increase assay complexity, exceed the detection limit, and even activate or reduce the viability of some cells. Finally, cells that are non-spherical or too few in number are sometimes incompatible with flow cytometry. For these reasons, alternative phenotyping methods are sought, with a focus on live-cell imaging. Here, we investigate the potential to develop an “electronic eye” that phenotypes cells directly from brightfield and non-specific fluorescence microscopy images. Cells from ten cancer cell lines (MCF7, MDA-MB-231, LNCaP, PC3, U2OS, HCT, THP-1, HL60, Jurkat and Raji) were non-specifically stained to identify their nucleus (Hoechst), cytoplasm (Calcein green), and actin filaments (SiR actin). Cells were dispensed into 96-well glass-bottomed imaging plates and imaged at 10X using brightfield and three fluorescence channels. The microscopy images were segmented into four-channel 51×51 pixel images, each containing a single cell. The segmentation process used the DAPI channel to locate individual nuclei and then used a combination of the other channels to ensure that no other cells or debris were in close proximity. To phenotype the cells, we developed a convolutional neural network (CNN) consisting of a four-channel input, four convolutional layers, one max-pooling layer, five fully connected layers, two dropout layers, and seven batch normalization layers. ReLU was used as the activation function following each convolutional or fully connected layer.
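The segmentation step described above can be sketched as follows. This is a generic NumPy illustration, not the authors' pipeline: the function name, the 40-pixel separation threshold, and the use of centroid distance as the "close proximity" test are all assumptions; only the four-channel input, the 51×51 patch size, and the nucleus-centred cropping come from the text.

```python
import numpy as np

PATCH = 51          # patch side length in pixels (from the text)
HALF = PATCH // 2   # 25-pixel margin on each side of the nucleus centroid

def extract_single_cell_patches(stack, centroids, min_separation=40.0):
    """Crop four-channel 51x51 patches centred on nucleus centroids.

    stack          : (4, H, W) array -- brightfield + three fluorescence channels
    centroids      : (N, 2) integer array of (row, col) nucleus positions,
                     e.g. obtained by thresholding and labelling the DAPI channel
    min_separation : reject a patch if another centroid lies closer than this
                     (a crude stand-in for the isolation check in the text)
    """
    _, H, W = stack.shape
    patches = []
    for i, (r, c) in enumerate(centroids):
        # keep only crops that lie fully inside the image
        if r - HALF < 0 or c - HALF < 0 or r + HALF + 1 > H or c + HALF + 1 > W:
            continue
        # isolation test: distance to the nearest other centroid
        others = np.delete(centroids, i, axis=0)
        if len(others) and np.min(np.hypot(others[:, 0] - r,
                                           others[:, 1] - c)) < min_separation:
            continue
        patches.append(stack[:, r - HALF:r + HALF + 1, c - HALF:c + HALF + 1])
    return np.stack(patches) if patches else np.empty((0, 4, PATCH, PATCH))
```

In practice the isolation check would also consult the cytoplasm and actin channels, as the text indicates; the centroid-distance rule here is only the simplest placeholder.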
Each node in the final fully connected layer represented the probability of the imaged cell belonging to one of the known cell lines. Softmax cross-entropy was used as the loss function for back-propagation during training. The CNN was trained for 20 epochs with Adam optimization, using dropout to avoid overfitting and rotational data augmentation to expand the dataset. Using five-fold cross-validation, we show that the CNN recognized each cell line with 94% average accuracy. Our results demonstrate the ability to use deep learning to phenotype cells directly from microscopy images without specific markers. This capability will be valuable in situations where phenotyping markers are unavailable or the cell sample cannot be stained (such as prior to therapeutic use). We envision this approach as a general method for identifying cell types directly from image data in order to detect the emergence of phenotypic shifts or new cell types.
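The softmax output and cross-entropy loss used for back-propagation can be written out explicitly. The sketch below is a generic NumPy illustration of that standard pairing (not the authors' code); it also shows why the combination is convenient: the gradient with respect to the logits reduces to the predicted probabilities minus the one-hot labels.

```python
import numpy as np

def softmax(logits):
    # subtract the row-wise max for numerical stability before exponentiating
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy_loss_and_grad(logits, labels):
    """Mean cross-entropy over a batch and its gradient w.r.t. the logits.

    logits : (batch, num_classes) raw scores from the final fully connected
             layer (num_classes = 10 cell lines in the study)
    labels : (batch,) integer class indices
    """
    n = logits.shape[0]
    probs = softmax(logits)
    loss = -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))
    # softmax + cross-entropy identity: dL/dlogits = probs - one_hot(labels)
    grad = probs.copy()
    grad[np.arange(n), labels] -= 1.0
    return loss, grad / n
```

The rotational augmentation mentioned in the text is equally simple for square patches: each 51×51 image can be rotated in 90° steps (e.g. `np.rot90(patch, k, axes=(1, 2))` for a channels-first patch), multiplying the effective dataset size without altering cell morphology.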