Automated in-field phenotyping technologies can significantly increase data throughput and reduce labor demand. However, acquiring under-canopy phenotypic data of corn/sorghum plants remains a challenging task for existing platforms such as UAVs, whose top-view cameras cannot see beneath the canopy. Robotic ground vehicles offer a promising alternative, but the challenges posed by variable working environments, tall-growing plants, and narrow row spacing must first be overcome. In this study, a fleet of automated robotic ground vehicles (Phenotbot 3.0) was designed and built for field-based phenotypic data acquisition of corn/sorghum plants. Each robot is equipped with a self-balancing sensor mast carrying our customized stereo camera heads with enhanced illumination, which image plants at close range with superior image quality and consistency. Morphological traits of brace roots, stalks, ears, leaves (e.g., leaf angle), and tassels or panicles at different growth stages can be extracted from images taken at different heights using our image processing pipelines. A narrow robot body with a central articulated steering mechanism and a differential gear power transmission enables the robot to travel swiftly between narrow crop rows with high energy efficiency. A navigation system combining sensor fusion, computer vision, and reinforcement learning was developed for autonomous in-field navigation and obstacle avoidance between crop rows. The navigation and data acquisition performance of the proposed phenotyping robot was evaluated both in a Gazebo simulation environment and in field tests.