DECEMBER 29 — In recent months, conversations about artificial intelligence (AI) in education have become increasingly urgent. In classrooms, at home, and even on social media, students are turning to AI tools to complete homework, solve problems, and generate explanations — often with impressive speed. Yet behind this convenience lies a growing concern: are students truly learning, or simply retrieving answers?
As an educator, I have witnessed this tension firsthand. Students are able to complete tasks quickly with the help of AI tools, but when similar problems are presented in a different form, many struggle to explain their reasoning or apply the same concepts independently. The issue is not a lack of ability, but an over-reliance on AI as a shortcut rather than a learning aid. Without proper guidance, AI risks weakening critical thinking, deep understanding, and academic integrity.
This is why ethics matter in AI-supported learning. AI is neither inherently good nor bad; its impact depends on how it is used. When students are not taught to question AI outputs, verify accuracy, or reflect on underlying concepts, technology can quietly replace thinking instead of strengthening it.
Yet banning AI is neither realistic nor productive. AI is already embedded in students’ daily lives — from navigation apps and recommendation systems to automated content generators. The real challenge is to educate students on how to use AI responsibly, critically and meaningfully, particularly within science, technology, engineering and mathematics (STEM) education, where reasoning and problem-solving are essential.
This pressing reality led to the development of AiSTEMeX — short for Artificial Intelligence, STEM and Exploration — an approach that integrates AI into STEM learning through exploration-based activities. Instead of treating AI as a shortcut to answers, students are guided to question how AI works, where it helps, and where its limitations lie. Learning becomes active, collaborative and reflective, rather than passive and answer-driven.
Recently, this approach was put into practice at the Centre for Foundation Studies in Science, Universiti Malaya (PASUM), through a STEM explorace programme conducted in collaboration with Maxis. Held on 15 and 16 November 2025, the programme involved 100 Form Four students from ten secondary schools around Kuala Lumpur, many from B40 backgrounds.
Rather than listening to lectures, students rotated through a series of interactive learning stations designed around exploration and teamwork. In one activity, students identified AI technologies already present in their everyday lives and discussed both their benefits and risks, including issues such as privacy and misinformation. This exercise encouraged students to see AI not as invisible infrastructure, but as a system that deserves scrutiny.
In another station, students used guided AI tools to complete simple creative tasks, such as drafting posters or campaign messages. Facilitators emphasised ethical use — checking accuracy, avoiding plagiarism, and acknowledging AI assistance. For many students, this was their first exposure to the idea that responsible AI use is a skill that must be learned.
Students were also challenged to design simple AI-based ideas to address real-world problems in areas such as education, health, safety and the environment. These activities highlighted that innovation is not only about technical sophistication, but about empathy, responsibility and social impact.
To ensure strong links with the school curriculum, the programme included subject-based stations in biology, physics and mathematics. Students explored biodiversity using AI-supported identification tools, experimented with physics simulations to understand cause-and-effect relationships, and solved mathematical problems manually before comparing their reasoning with AI-generated solutions. This comparison reinforced an important lesson: AI can support understanding, but it cannot replace thinking.
The programme concluded with a reflection session, where students shared what surprised them most about AI and what responsible use meant to them. Many expressed greater confidence — not in letting AI think for them, but in thinking alongside AI.
Feedback from participants was overwhelmingly positive. Students reported higher engagement, clearer understanding of STEM concepts, and greater awareness of ethical AI use. Facilitators observed improved teamwork, curiosity and confidence — outcomes that are difficult to measure through examinations alone, yet critical for long-term learning.
AiSTEMeX reminds us that strengthening STEM education is a long-term effort. It requires continuous innovation, ethical awareness and collaboration across all levels of education — from early schooling to tertiary institutions — in line with Malaysia’s national STEM goals.
Preparing future-ready learners is not only about what students know, but about how they think, question and make decisions in an AI-powered world.
As AI continues to shape education and society, initiatives like AiSTEMeX demonstrate that the most effective response is not resistance, but responsible integration — one that places ethics, understanding and exploration at the heart of learning.
*Amirul Mohamad Khairi bin Mannan is a Foundation Lecturer (Mathematics) at the Centre for Foundation Studies in Science, Universiti Malaya and may be reached at amirulkhairi@um.edu.my.
** This is the personal opinion of the writer or publication and does not necessarily represent the views of Malay Mail.