09 Jun 2025
10 mins

AI in Education – Reimagining Teaching, Learning, and Leadership 

Written by: Nikhila Suresh


Insights from a Distinguished Panel on the Future of AI-Driven Education 

In a gripping panel discussion at PANORAMA 2025, led by Dr. Manojkumar Nagasampige, Director – Directorate of Online Education, experts from technology, educational institutions, and the healthcare sector discussed how artificial intelligence (AI) is transforming education. The session commenced with a short rhyme by Dr. Manojkumar:  

“AI AI shining bright, 

 learning fast thinking right,  

helping humans every day 

 in a smart and quiet way.”  

The verse captures what AI has become: no longer a sci-fi trope or a technology buzzword, but a reality shaping us in a very positive way, steadily changing how we teach, learn, assess, and develop.

The distinguished panelists included: 

  • Dr. Rohini Srivathsa, Chief Technology Officer – Microsoft India and SA 
  • Lt. Gen. (Dr.) M D Venkatesh, Vice Chancellor – Manipal Academy of Higher Education 
  • Prof. Sudarshan N S Acharya, Assistant Professor – Manipal School of Information Sciences 
  • Dr. Ganesh Paramasivam, Associate Professor – Cardiology, Kasturba Medical College, and Coordinator, Centre for Excellence in AI at Kasturba Medical College, MAHE  
  • Mr. Ambrish Sinha, CEO – UNext Learning Pvt. Ltd. 

How is AI reshaping the infrastructure and digital ecosystems that support modern education and EdTech platforms? 

— Dr. Rohini Srivathsa, CTO, Microsoft India and SA  


Dr. Rohini began by reminding everyone that AI is not new. “It has been there in AlphaGo, in solutions for protein folding and in chess games,” she said. “However, the difference is that AI is now communicating, listening, thinking, and perceiving.” She highlighted the period since the end of 2022 as the pivotal moment that pushed AI to centre stage. In her view, three main factors explain AI’s transformative power in educational systems: 

  • Conversational Interfaces: AI tools like ChatGPT let users hold sustained, contextual conversations, making the learning process feel more natural. 
  • Reasoning Capabilities: Unlike earlier systems that simply returned answers to the tasks they were set, today’s AI can analyse, explain, and adapt, changing how learners engage with technology. 
  • Multimodal Understanding: Modern AI can also process visuals, text, and speech simultaneously. This deepens its grasp of educational content and lets it guide learners through interactive setups and support a continuous learning journey across formats. 

In combination, these innovations are rewriting the digital architecture in education, from new-age classrooms and intelligent learning ecosystems to AI-powered administrative infrastructures. 

What are the biggest opportunities and risks you’ve encountered while developing or deploying AI-enabled tools for learners? 

— Mr. Ambrish Sinha, CEO, UNext Learning Pvt. Ltd. 

Mr. Ambrish Sinha emphasized that AI is energizing EdTech in a big way. “The transition has been incredible, definitely from data analytics to GenAI,” he said. The most significant shift? Personalized learning across all educational levels, including competitive test preparation. 

  • Continuous Learning: In higher education, AI will strengthen people’s capacity to “unlearn, relearn,” and keep learning continuously. 
  • Productivity Abundance: AI works like a “great productivity hack,” completing the same tasks in less time; most people are already using tools such as ChatGPT or Perplexity. 
  • Enhanced Engagement: AI will significantly raise student and learner engagement, especially for online learners, with tools such as Lumen Learn and Lumen Quiz enabling faster-paced learning. 
  • Reduced Learning Anxiety: AI can become a genuine de-stressor for learners: by behaving like adaptive tests (e.g., the GMAT) that adjust to the learner and continuously show current progress, its contextual understanding will “accentuate your learning.” (A minimal sketch of this adaptive idea follows this list.)
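
To make that adaptive-testing idea concrete, here is a minimal, illustrative Python sketch of how a learning tool might raise or lower question difficulty and report progress as a learner answers. The class name and the simple up/down rule are assumptions for illustration, not a description of any product mentioned on the panel.

    # Minimal sketch of rule-based adaptive difficulty (illustrative only).
    class AdaptiveQuiz:
        def __init__(self, levels=5):
            self.level = 1            # current difficulty (1 = easiest)
            self.max_level = levels
            self.history = []         # (level, correct) pairs

        def record_answer(self, correct: bool) -> None:
            """Step up after a correct answer, step down after a wrong one."""
            self.history.append((self.level, correct))
            if correct:
                self.level = min(self.level + 1, self.max_level)
            else:
                self.level = max(self.level - 1, 1)

        def progress_report(self) -> str:
            """Give the learner a running view of where they stand."""
            answered = len(self.history)
            right = sum(1 for _, ok in self.history if ok)
            return (f"Answered {answered}, correct {right}, "
                    f"current difficulty {self.level}/{self.max_level}")

    quiz = AdaptiveQuiz()
    for outcome in [True, True, False, True]:   # simulated answers
        quiz.record_answer(outcome)
    print(quiz.progress_report())   # Answered 4, correct 3, current difficulty 3/5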

Nevertheless, there are some risks that come along with these: 

  • Overdependence on AI: Students may end up outsourcing their critical thinking to AI. 
  • Job Displacement: Certain educational roles might disappear as AI systems take over teaching or support functions.  

He concluded that the most important factor is human–AI collaboration: using technology to support educators and the workforce, not to replace them. 

What cultural or mindset shifts do you believe are essential among faculty and administrators to integrate AI into higher education effectively? 

— Lt. Gen. (Dr.) M D Venkatesh, Vice Chancellor, Manipal Academy of Higher Education 

Lt. Gen. (Dr.) M D Venkatesh offered an administrator’s perspective. “It is not the technology that disrupts – it is the mindset,” he said. The major challenge is convincing teachers across generations that AI is not a danger but a tool for them to use. 

He pointed out the mindset shifts required: 

  • Accepting Change: Senior faculty often resist because they are unfamiliar with the new technology or fear being replaced by it. 
  • Empowerment Through Training: Administrators must invest in capacity building so that AI tools are accessible and easy to use. 
  • Redefining Roles: Faculty should see AI not as a competitor but as a way to offload routine work and focus on higher-order mentoring. 

Lt. Gen. (Dr.) M D Venkatesh concluded that education is no longer just about “knowledge,” but encompasses “knowledge, skills, attitude, application and innovation.” 

How can AI enhance hands-on, skill-based training in technical and vocational education—and what limitations should we be aware of? 

— Prof. Sudarshan N S Acharya, Assistant Professor, Manipal School of Information Sciences 

Prof. Sudarshan Acharya added a philosophical dimension to the conversation. “Consider AI as enhanced intelligence,” he stated. He recounted an exercise in which he gave his students a real ICU dataset and asked them to find anomalies using any tool, AI or otherwise. 

The outcome? Perfectly logical, but missing the human element. “Students didn’t consult with doctors—they went ahead with AI only. That is a problem. AI ought to energize us to work more with humans, not less,” he recalled. He is now experimenting with adaptive assessment models that, like students, learn and change through reinforcement, as sketched below. 
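
As a rough illustration of what “learning through reinforcement” could look like inside an assessment engine, the sketch below uses a simple epsilon-greedy bandit to pick the next question’s difficulty, rewarding correct answers more at harder levels so the policy settles on questions that stretch the learner. The reward weights and the simulated learner are assumptions for illustration, not Prof. Acharya’s actual model.

    # Epsilon-greedy sketch: learn which difficulty level best balances
    # challenge and success (illustrative assumptions throughout).
    import random

    LEVELS = {"easy": 1.0, "medium": 2.0, "hard": 3.0}   # reward weight per level

    def choose_level(values, epsilon=0.2):
        """Explore a random level with probability epsilon, otherwise exploit."""
        if random.random() < epsilon:
            return random.choice(list(LEVELS))
        return max(values, key=values.get)

    def update(values, counts, level, reward):
        """Keep an incremental average of the rewards observed for this level."""
        counts[level] += 1
        values[level] += (reward - values[level]) / counts[level]

    values = {lvl: 0.0 for lvl in LEVELS}
    counts = {lvl: 0 for lvl in LEVELS}
    answer_prob = {"easy": 0.9, "medium": 0.6, "hard": 0.3}   # simulated learner

    for _ in range(200):
        level = choose_level(values)
        correct = random.random() < answer_prob[level]
        reward = LEVELS[level] if correct else 0.0   # harder + correct pays more
        update(values, counts, level, reward)

    print(values)   # "medium" typically ends up with the highest estimated value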

The exercise highlighted a few significant limitations: 

  • Diminished critical thinking: AI tools can produce a well-written paper, but they may leave students less engaged with the subject and less inclined to ask questions. 
  • Over-reliance on technology: Students may take the shortcut, bypassing the scientific method and expert input and assuming the AI is always right. 
  • Evaluation gaps: AI still has to get better at responding to changes in a learner’s skills during adaptive learning. 

Prof. Acharya proposed “augmented intelligence” as the path forward, with the human mind and AI tools working together. 

How is AI being used in healthcare education, especially in simulations, diagnostics training, or clinical decision support? 

— Dr. Ganesh Paramasivam, Associate Professor – Cardiology, Kasturba Medical College 

Dr. Ganesh turned to the healthcare sector and used a metaphor to make the shift tangible. “Traditional simulations are cold and impersonal,” he said. “With AI, it is possible to get virtual patients who not only talk, but also panic, scream and use regional languages.” It is a game-changer for medical students, especially in rural areas with limited resources. 

However, the scale these AI tools can reach brings new problems, particularly around how students are evaluated and how equitable access is provided. “It is obvious that not every medical college is able to provide AI simulators to students. So, what are the ways to guarantee fairness?” he asked. 

Dr. Ganesh also flagged cybersecurity gaps, particularly in health-related areas. “The human spot is still the weakest,” he pointed out, citing data leaks caused by weak employee passwords and internal sabotage. 

The most essential applications he pinpointed are: 

  • AI Simulations: Virtual patients with emotional and linguistic variability make the training experience more realistic, richer, and more intuitive than static mannequins. 
  • AI Tools: Systems that analyse and describe imaging data (e.g., X-rays, MRIs) markedly improve students’ learning outcomes. 
  • Clinical Decision Support: AI-powered systems can assist surgeons and clinicians by suggesting diagnoses and the most suitable treatment, though the human remains the final decision-maker. 

However, some problems still exist: 

  • Cost and Access: High-end AI simulators are expensive, so access may be concentrated in affluent urban regions and out of reach where budgets are tight. 
  • Regulation and Data Privacy: Healthcare data is highly sensitive, so AI systems need stronger governance and greater transparency to protect it. 
  • Trustworthiness: The inner workings of most AI tools are still “black boxes,” making it hard for learners and doctors to understand the logic behind a suggestion and decide whether to trust it. 

Dr. Ganesh advocated for transparent AI and equitable access, emphasizing that the future of education should not be limited to the elite. 

Challenges: Equity, Trust, and the Future of Work 

The panel was clear that the major challenges of equity, trust, and responsible AI remain. Dr. Ganesh noted that although AI enables scale, high-cost simulators and tools can widen the accessibility gap in education and healthcare. Lt. Gen. (Dr.) M D Venkatesh agreed that equity must be co-created: technology has to be inclusive and its benefits available to all. 

On trustworthiness, Dr. Rohini said that AI is not automatically fair, transparent, and safe; these qualities have to be built in deliberately. Prof. Acharya added that without human supervision, agentic AI models may struggle to cope with real-world complexity and the difficulties it throws at them. 

Data privacy and cybersecurity emerged as major concerns. Prof. Acharya argued that localized AI models help institutions retain control over their data, while Dr. Ganesh reiterated that humans remain the weakest link in the chain of cyber-attacks and stressed the need for clear regulatory frameworks, particularly in highly sensitive areas like healthcare. 

Mr. Ambrish Sinha stated that AI will eliminate certain roles while creating new ones completely different from today’s, much as computers replaced typists. Lt. Gen. (Dr.) M D Venkatesh ended on an optimistic note: “The human mind will be the one deciding how far AI goes. The brain will always be in the lead role.” 

Explore our online programs to become future-ready

View All Courses
  • ai
  • Artificial Intelligence