I am an academic in the School of Computer Sciences at Universiti Sains Malaysia (USM). Prior to this, I served as a Senior Lecturer in Computer Engineering at Universiti Teknologi MARA, Pulau Pinang.
My research is in the fields of machine learning and deep learning for computer vision and pervasive computing. Currently, I am focusing on problems in human motion analysis such as video and signal segmentation, representation (feature) learning, and classification.
Looking forward, I am interested in
- achieving effective segmentation of input data
- unsupervised feature learning using deep learning, i.e., learning more salient feature representations
- data augmentation/generation using deep generative models
My research has been published in the Journal of Ambient Intelligence and Humanized Computing, Neural Computing and Applications, Knowledge-Based Systems, Pervasive and Mobile Computing, and in the proceedings of several conferences.
Feel free to contact me to discuss any related topic or to propose a research topic.
Address: 610, School of Computer Sciences, Universiti Sains Malaysia
|The hand-crafted (time- and frequency-based) features that are extracted and selected are heuristic and rely on expert knowledge of the domain. The features may be effective in certain specific settings, but the same features might fail to discriminate the activities in a more general environment. Furthermore, hand-crafted feature extraction and selection are time-consuming, laborious and error-prone, and might still achieve suboptimal recognition performance. In this research, an unsupervised feature learning method for activity recognition is proposed. The proposed method eliminates the need for manual feature engineering, making it more accurate in learning the underlying features of the data. Furthermore, the proposed method maps the sensor data into a lower-dimensional feature space, consequently reducing the computational cost and improving generalization. [pdf][read]|
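The dimensionality-reduction idea can be sketched with a minimal linear autoencoder in NumPy (a toy illustration on synthetic data, not the published method): windows of raw sensor readings are encoded into a small learned feature space and decoded back, with both weight matrices trained to minimize reconstruction error.

```python
import numpy as np

def train_autoencoder(X, n_hidden, epochs=500, lr=0.05, seed=0):
    """Train a minimal linear autoencoder by gradient descent on the
    mean squared reconstruction error."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W_enc = rng.normal(scale=0.1, size=(n_features, n_hidden))
    W_dec = rng.normal(scale=0.1, size=(n_hidden, n_features))
    for _ in range(epochs):
        H = X @ W_enc          # encode: low-dimensional representation
        X_hat = H @ W_dec      # decode: reconstruction of the input
        err = X_hat - X
        grad_dec = H.T @ err / len(X)
        grad_enc = X.T @ (err @ W_dec.T) / len(X)
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc
    return W_enc, W_dec

# synthetic stand-in for windows of 6-axis sensor readings
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))
W_enc, W_dec = train_autoencoder(X, n_hidden=2)
features = X @ W_enc                              # learned 2-D features
mse = np.mean((X @ W_enc @ W_dec - X) ** 2)       # reconstruction error
print(features.shape)  # (100, 2)
```

The learned `features` matrix would then replace hand-crafted features as input to a classifier.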
|One of the key factors behind the high accuracy of deep learning is a sufficient amount of training data. A robust and reliable model needs a vast amount of data to precisely capture the underlying pattern of the data. However, data collection in activity recognition is expensive and time-consuming, especially from elderly people. In this research, we propose a unified deep generative model based on the conditional generative adversarial network (CGAN). The proposed model is able to generate verisimilar sensor data of different activities using a single model. The generated data allows for data augmentation in the HAR classification problem to improve recognition accuracy. [pdf][read]|
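The conditioning idea that lets one CGAN generate any activity class can be illustrated with a toy generator: the noise vector is concatenated with a one-hot activity label before being mapped to a fake sensor sample. This is only a sketch of the conditioning mechanism with made-up dimensions; the published model is a trained adversarial network.

```python
import numpy as np

def conditional_generator(z, label, n_classes, W):
    """Toy conditional generator: concatenating noise with a one-hot
    activity label lets a single model synthesize any class."""
    onehot = np.eye(n_classes)[label]
    inp = np.concatenate([z, onehot])
    return np.tanh(W @ inp)  # fake sensor sample in [-1, 1]

rng = np.random.default_rng(0)
n_noise, n_classes, n_out = 8, 5, 3   # e.g. one tri-axial accelerometer sample
W = rng.normal(size=(n_out, n_noise + n_classes))
# ask the same model for a sample of class 2 (a hypothetical "walking" label)
sample_walking = conditional_generator(rng.normal(size=n_noise), 2, n_classes, W)
print(sample_walking.shape)  # (3,)
```

In a real CGAN, `W` would be a deep network trained adversarially against a discriminator that also sees the label.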
|A key factor in signal segmentation is selecting a suitable window size for activity classification. Window size is important because the window needs to capture the necessary characteristics of a signal in order to achieve correct detection/classification. Short windows could slice an activity signal into multiple separate windows, and such a truncated signal lacks the full information needed to describe the activity. On the other hand, a larger window could contain multiple activity signals, which could also lead to misinterpretation of physical activities. The most effective window size depends on the type of signal being evaluated because different activities have different periods of motion. In this research, we propose a novel signal segmentation approach that can adaptively change the initial fixed window size to deal with transitional activity signals of varying lengths. [pdf]|
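The adaptive idea can be sketched as follows (hypothetical parameter values and a simple variance criterion, not the published algorithm): start from a fixed base window and keep extending it while the samples just beyond the boundary still show activity-level variance, so a long transitional movement is kept in a single window instead of being truncated.

```python
import numpy as np

def adaptive_segments(signal, base=50, step=10, thresh=0.5):
    """Segment a 1-D signal with windows that grow past the base size
    while the region just beyond the boundary still shows high local std."""
    segments, start = [], 0
    while start < len(signal):
        end = min(start + base, len(signal))
        # extend the window while the next `step` samples look active
        while end < len(signal) and np.std(signal[end:end + step]) > thresh:
            end = min(end + step, len(signal))
        segments.append((start, end))
        start = end
    return segments

# quiet -> long transitional activity -> quiet (synthetic)
rng = np.random.default_rng(0)
sig = np.concatenate([rng.normal(0, 0.05, 60),
                      rng.normal(0, 3.0, 120),
                      rng.normal(0, 0.05, 60)])
segs = adaptive_segments(sig, base=50, step=10, thresh=0.5)
print(segs)
```

Note how the second window grows beyond the 50-sample base size to cover the entire high-variance burst.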
|Despite the advantages of ontology-based techniques, a key limitation remains to be tackled: dealing with uncertainty. In this research, we propose a novel reasoning algorithm that integrates the OWL ontological reasoning mechanism with the Dempster-Shafer theory of evidence to support handling uncertainty in activity recognition. [pdf]|
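The uncertainty-handling side can be illustrated with Dempster's rule of combination, which fuses two independent mass functions over sets of hypotheses and renormalizes away the mass assigned to conflicting (empty-intersection) pairs. The activity labels below are hypothetical examples, not taken from the paper.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts keyed by frozenset focal elements)
    using Dempster's rule of combination."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2          # contradictory evidence
    k = 1.0 - conflict                       # normalization constant
    return {s: w / k for s, w in combined.items()}

# two sensors give uncertain evidence about the ongoing activity
m1 = {frozenset({"cooking"}): 0.6, frozenset({"cooking", "eating"}): 0.4}
m2 = {frozenset({"eating"}): 0.3, frozenset({"cooking", "eating"}): 0.7}
m = dempster_combine(m1, m2)
print(m)
```

The combined masses again sum to one, and the fused belief favors "cooking" because both sources lend it support.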
|There are two major approaches to sensor-based activity recognition: the first makes use of dedicated wearable sensors, and the second makes use of sensors attached to objects that are part of the environment. In this research, the best of both human sensing approaches is harnessed to achieve robust and comprehensive activity recognition. [pdf]|
*[IF: XXX] denotes the journal impact factor in the year of publication.
 M. H. Chan, M. H. M. Noor, “A Unified Generative Model using Generative Adversarial Network for Activity Recognition”, Journal of Ambient Intelligence and Humanized Computing, 2020, Article in Press. [IF: 4.594] doi,pdf,read
 M. H. M. Noor, M. A. Ahmadon, M. K. Osman, “Activity Recognition using Deep Denoising Autoencoder,” 2019 9th IEEE International Conference on Control System, Computing and Engineering (ICCSCE), 2019, pp. 188 – 192. doi,pdf
N. A. M. Yusof, A. Ibrahim, M. H. M. Noor, N. M. Tahir, N. M. Yusof, N. Z. Abidin, M. K. Osman, “Deep Convolution Neural Network for Crack Detection on Asphalt Pavement,” Journal of Physics: Conference Series, Vol. 1349, No. 1, IOP Publishing, 2019. doi,pdf
 N. A. M. Yusof, M. K. Osman, Z. Hussain, M. H. M. Noor, A. Ibrahim, N. M. Tahir, N. Z. Abidin, “Automated Asphalt Pavement Crack Detection and Classification using Deep Convolution Neural Network,” 2019 9th IEEE International Conference on Control System, Computing and Engineering (ICCSCE), 2019, pp. 215 – 220. doi
N. A. M. Yusof, M. K. Osman, M. H. M. Noor, A. Ibrahim, N. M. Tahir, N. M. Yusof, “Crack Detection and Classification in Asphalt Pavement Images using Deep Convolution Neural Network,” 2018 8th IEEE International Conference on Control System, Computing and Engineering (ICCSCE), 2018, pp. 227 – 232. doi
M. H. M. Noor, Z. Salcic, and K. I.-K. Wang, “Ontology-based Sensor Fusion Activity Recognition”, Journal of Ambient Intelligence and Humanized Computing, vol. 11, no. 8, pp. 3073–3087, Jan. 2018. [IF: 2.505] doi,pdf
 M. H. M. Noor, Z. Salcic, and K. I.-K. Wang, “Adaptive Sliding Window Segmentation for Physical Activity Recognition using a Single Tri-axial Accelerometer”, Pervasive and Mobile Computing, vol. 38, Part 1, pp. 41–59, Jul. 2017. [IF: 2.349] doi,pdf
 M. H. M. Noor, Z. Salcic, and K. I.-K. Wang, “Enhancing Ontological Reasoning with Uncertainty Handling for Activity Recognition”, Knowledge-Based Systems, vol. 114, pp. 47–60, Dec. 2016. [IF: 4.529] doi,pdf
 M. H. M. Noor, Z. Salcic, and K. I.-K. Wang, “Dynamic Sliding Window Method for Physical Activity Recognition using a Single Tri-axial Accelerometer”, in 2015 IEEE 10th Conference on Industrial Electronics and Applications (ICIEA), 2015, pp. 102–107. doi
 M. H. M. Noor, Z. Hussain, K. A. Ahmad, and A. R. Ainihayati, “Gel Electrophoresis Image Segmentation with Otsu Method based on Particle Swarm Optimization”, in 2011 IEEE 7th International Colloquium on Signal Processing and its Applications (CSPA), 2011, pp. 426–429. doi
 M. H. M. Noor, A. R. Ahmad, Z. Hussain, K. A. Ahmad, and A. R. Ainihayati, “Multilevel Thresholding of Gel Electrophoresis Images using Firefly Algorithm”, in 2011 IEEE International Conference on Control System, Computing and Engineering (ICCSCE), 2011, pp. 18–21. doi
1. Nor Aizam Muhamed Yusof, Pavement Distress Analysis using Deep Learning, 2017-Present, Co-supervisor
2. Noratikah Nordin, Prediction by Machine Learning of Suicide Attempts among Adolescents in Malaysia, 2018-Present, Co-supervisor
3. Haruna Abdu, Wearable Sensor-based Activity Recognition, 2019-Present, Co-supervisor
4. Raid S. A. Basheer, Multiscale Brain Modeling and Analysis, 2019-Present, Main Supervisor
5. Ali Olow Jimale, Deep Learning Approach for Activity Recognition, 2020-Present, Main Supervisor
6. Bello Ibrahim Kangiwa, Engaging the At-risk Online Learners Using Learning Analytic, 2020-Present, Main Supervisor
7. Abdulrahman M A Baraka, Weakly-Supervised Temporal Action Localization, 2020-Present, Main Supervisor
8. Fathe Said Emhemed Shaninah, Defining the Best Personalized Learning Method using Machine Learning, 2020-Present, Main Supervisor
9. Ige Ayokunle Olalekan, Activity Recognition using Hybrid Unsupervised Deep Learning Techniques in Healthcare, 2020-Present, Main Supervisor
10. Hadeel Sameer Mohd Al Tahainah, Developing and Analyzing Artificial Intelligence-Based Algorithms for Obtaining Super Resolution Satellite Images, 2020-Present, Main Supervisor
11. Al Tabrawee Hussein Allawi Hasan, Medical Assistant System Based On Machine Learning Using Heterogeneous Medical Data Sources, 2020-Present, Main Supervisor
12. Alqablan Tamara Amjad Abdelkarim, A Hybrid Model of Hopfield Neural Network and Ant Colony Optimization Algorithm For Effective Network Intrusion Detection System, 2020-Present, Main Supervisor
13. Sani Tijjani, Semi-Supervised Learning for Human Activity Recognition, 2020-Present, Main Supervisor
1. Chan Mang Hong, Data Generation using Generative Adversarial Network for Human Activity Recognition, 2019, Main Supervisor
2. Jodene Ooi Yen Ling, Predicting Freezing of Gait in Parkinson’s Disease with Autoencoder-based Representation Learning, 2019, Main Supervisor
3. Loh Jing Zhi, MobileNet-SVM: A Hybrid, Light-weight Deep Learning Architecture for Human Activity Recognition, 2019, Main Supervisor
4. Yap Kah Liong, Signal Segmentation using You Only Look Once Network for Human Activity Recognition, 2019, Main Supervisor
5. Lim Chin Tiong, Comparative Study of Deep Learning-based Object Detection Algorithms on Real-time Embedded System, 2019, Main Supervisor
6. Tan Sen Yan, Hybrid Deep Learning Architecture for Activity Recognition, 2020, Main Supervisor
7. Liau Wei Jie Brigitte, Semiconductor OCR Using Deep Learning, 2021, Main Supervisor
1. Real-time Activity Recognition using Wearable Inertial Sensors, Short-term Research Grant, USM, RM34,488.40 – Principal Investigator.
2. Shaping Pro-Environment Behaviours: Awareness Apps, Long-term Research Grant Scheme, Ministry of Education, RM186,400 – Principal Investigator.
3. Dimensionality Reduction for Wearable Health Devices, Fundamental Research Grant Scheme, Ministry of Education, RM74,700.00 – Principal Investigator.
4. Image Data Analytics for Industry 4.0, CREST, RM206,500.00 – Co-Investigator.
Japan Advanced Institute of Science and Technology: 27/10/2018 – 10/11/2018