General Information

Name: Akshay L Chandra
Email: research [at] domain (or) email [at] domain

Publications
  • 2022
    How Useful Is Image-Based Active Learning for Plant Organ Segmentation?
    Plant Phenomics, Feb. 2022
    • Authors: Shivangana Rawat, Akshay L Chandra, Sai Vikas Desai, Vineeth N Balasubramanian, Seishi Ninomiya, and Wei Guo
  • 2021
    On Initial Pools for Deep Active Learning
    NeurIPS 2020 Workshop on Pre-registration in Machine Learning, Dec. 2021
    • Authors: Akshay L Chandra*, Sai Vikas Desai*, Chaitanya Devaguptapu*, and Vineeth N. Balasubramanian
  • 2020
    Active Learning with Point Supervision for Cost-Effective Panicle Detection in Cereal Crops
    Plant Methods (BioMed Central), Mar. 2020
• Authors: Akshay L Chandra*, Sai Vikas Desai*, Vineeth N Balasubramanian, Seishi Ninomiya, and Wei Guo
  • 2020
    EasyRFP: An Easy to Use Edge Computing Toolkit for Real-Time Field Phenotyping
    Extended Abstract at CVPPP & ECCV Academic Demonstrations, Aug. 2020
• Authors: Akshay L Chandra*, Sai Vikas Desai*, Hirafuji Masayuki, Seishi Ninomiya, Vineeth N Balasubramanian, and Wei Guo
  • 2019
    An Adaptive Supervision Framework for Active Learning in Object Detection
    British Machine Vision Conference, Aug. 2019
    • Authors: Sai Vikas Desai*, Akshay L Chandra*, Wei Guo, Seishi Ninomiya, and Vineeth N Balasubramanian
  (* denotes equal contribution)

Education
  • 2021-Now
    Master of Science (Computer Science)
    University of Freiburg, Freiburg, Germany
  • 2017-2018
    Post-Graduate Diploma (Applied Statistics)
    Indira Gandhi National Open University, Delhi, India
  • 2013-2017
    Bachelor of Technology (Computer Science and Engineering)
    Jawaharlal Nehru Technological University, Hyderabad, India

Experience
  • 2022 - Now
    Student Research Assistant
    Robot Learning Lab, University of Freiburg, Germany
    • Working on robot learning and manipulation, supervised by Iman Nematollahi and Dr. Tim Welschehold. Most of my days in the lab are spent trying to better define, learn, sequence, and refine robot "skills".
  • 2018-2021
    Research Assistant
    Indian Institute of Technology, Hyderabad, India
    • I spent almost three years assisting Prof. Vineeth N Balasubramanian's research at Lab1055. I also collaborated actively with Prof. Wei Guo from the International Field Phenomics Research Laboratory on plant phenomics problems.
    • During my time at IITH, I was fortunate enough to publish four peer-reviewed research articles in the area of "Machine Learning with Limited Labeled Data", with a few more still to come. I feel extremely lucky to have had the opportunity to learn from the brilliant members of the lab how cutting-edge computer science research is done.
  • 2017-2018
    Associate Software Engineer
    GGK Technologies, Hyderabad, India
    • While in training, I spent most of my days learning, understanding, implementing, and pondering various concepts of linear algebra, calculus (optimization), and machine learning. Completed two POCs: "Reservation Cancellation Prediction" and "Fashion Clothes Recommendation System". Won the Trainee-of-the-Month award in July 2017 amongst 28 other trainees.
    • As a core member of the AI team, I helped clients in the healthcare, retail, and e-commerce industries optimize their existing processes by building useful prediction models, capturing customer/patient behaviour patterns, identifying correlation and causation, and maintaining data quality for smooth visualization and modeling. Exclusively worked on building a computer vision application that detects product pickups in a retail store from CCTV footage alone.
    • Won the Star-of-the-Month award in August 2018 for successfully incorporating a custom object detection model into the above-mentioned computer vision application.
  • 2016
    Software Engineer Intern
    Polycom Inc., Hyderabad, India
    • Worked on their Android platform. Enhanced existing code for the 'Jaguar' version update of their operating system. Completed a speech recognition POC in Android using Google's Speech API.