Fatigue-related impairment is a major contributing factor to road accidents; however, detecting early visual indicators of driver fatigue remains challenging under realistic driving conditions. This study introduces an artificial intelligence–based system for distinguishing between alert and fatigued drivers using facial images captured in natural driving environments. A total of 41,793 annotated facial images from the Driver Drowsiness Dataset (DDD) were used in the experiments. Although the dataset reflects realistic driving scenarios captured by dashboard-mounted cameras, the proposed system was evaluated offline and was not tested in live traffic environments. Deep visual features were extracted using the SqueezeNet architecture and subsequently classified with three supervised learning models: Artificial Neural Networks (ANN), Random Forests (RF), and Support Vector Machines (SVM). Among the evaluated classifiers, ANN achieved the highest performance with an accuracy of 99.97%, followed by RF with 99.78% and SVM with 96.33%. The results indicate that combining lightweight deep feature extraction with classical machine learning classifiers can yield highly accurate fatigue detection while maintaining computational efficiency. The proposed framework therefore offers a practical basis for efficient driver fatigue monitoring systems, with potential applications in accident prevention and road safety, although real-time performance in live traffic remains to be validated.
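
The pipeline summarized above pairs a frozen SqueezeNet feature extractor with classical classifiers. The following is a minimal sketch of that idea, not the authors' exact configuration: the use of torchvision and scikit-learn, the dataset path and folder layout, the 80/20 split, and all hyperparameters are illustrative assumptions.

```python
# Sketch: SqueezeNet as a frozen feature extractor, classical classifiers on top.
# Library choices, paths, and hyperparameters are assumptions for illustration.
import numpy as np
import torch
import torchvision.transforms as T
from torchvision import datasets, models
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Pretrained SqueezeNet, used only to produce deep visual features.
backbone = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: DDD/drowsy/*.jpg and DDD/alert/*.jpg
dataset = datasets.ImageFolder("DDD/", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=False)

features, labels = [], []
with torch.no_grad():
    for images, targets in loader:
        # Convolutional feature maps (batch, 512, 13, 13) -> 512-dim vectors
        fmap = backbone.features(images)
        vec = torch.nn.functional.adaptive_avg_pool2d(fmap, 1).flatten(1)
        features.append(vec.numpy())
        labels.append(targets.numpy())

X = np.concatenate(features)
y = np.concatenate(labels)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Three supervised classifiers on the extracted features (settings are placeholders).
classifiers = {
    "ANN": MLPClassifier(hidden_layer_sizes=(128,), max_iter=300),
    "RF": RandomForestClassifier(n_estimators=200),
    "SVM": SVC(kernel="rbf"),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(name, accuracy_score(y_test, clf.predict(X_test)))
```

Keeping the backbone frozen and training only lightweight classifiers is what makes this kind of pipeline computationally inexpensive relative to end-to-end fine-tuning.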