Advisory Board (AB) Meeting and
Final Year Project (FYP) Industry Day 2021

14 May 2021 (Friday)

The Department of Electronic and Computer Engineering of HKUST is a world-class department with over 40 faculty members conducting research in areas including Solid-State Electronics and Photonics, Integrated Circuits and Systems, Control and Robotics Systems, Data Science and AI, and Wireless Communications and Networking. The ECE Advisory Board serves as a bridge between the Department and its key industry affiliates, facilitating regular communication and interaction through annual events to keep our curriculum and research programs relevant to industry.

 

 

Rundown

Time               Activity
9:30am – 12:00nn   Advisory Board (AB) Meeting
1:30pm – 3:30pm    Final Year Project (FYP) Demo Session
3:30pm – 4:00pm    Award Ceremony for the Best FYP Demos

 

List of FYP Demonstrations

Demo 1 - Thermal Camera Temperature Measurement (and Facemask Alert) System Using Embedded Machine Learning (MWH03a-20)

This project aims to provide a fast-screening, embedded machine learning solution that performs efficient mass body-temperature measurement and alerts security guards when detected people are not wearing masks properly or have a fever. Since COVID-19 has raised public awareness of controlling infection spread through frequent body-temperature screening, it is well established that people with a fever or without a mask are forbidden from entering public areas. On that basis, our system leverages the computation power of the Coral USB Accelerator to run face detection and mask classification models on a Raspberry Pi single-board computer, and estimates body temperature from the infrared radiation emitted by the forehead, captured by a FLIR ONE Pro thermal camera. Results indicate a mask-classification training accuracy of 99.57%, a validation accuracy of 93.54%, and a real-time temperature measurement error within 0.5°C.
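
The sketch below illustrates how such a mask classifier can be run on the Coral USB Accelerator from a Raspberry Pi using the TensorFlow Lite runtime with the Edge TPU delegate; the model file name, input handling, and label order are assumptions for illustration rather than the project's actual code.

    import numpy as np
    from tflite_runtime.interpreter import Interpreter, load_delegate

    # Load a quantized classifier compiled for the Edge TPU and attach the
    # Coral USB Accelerator via the Edge TPU delegate.
    interpreter = Interpreter(
        model_path="mask_detector_edgetpu.tflite",               # hypothetical model file
        experimental_delegates=[load_delegate("libedgetpu.so.1")],
    )
    interpreter.allocate_tensors()
    input_detail = interpreter.get_input_details()[0]
    output_detail = interpreter.get_output_details()[0]

    def classify_face(face_rgb):
        """face_rgb: HxWx3 uint8 crop already resized to the model's input size."""
        interpreter.set_tensor(input_detail["index"], np.expand_dims(face_rgb, 0))
        interpreter.invoke()
        scores = interpreter.get_tensor(output_detail["index"])[0]
        labels = ["mask", "no_mask"]                             # assumed label order
        return labels[int(np.argmax(scores))], float(np.max(scores))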

 

Demo 2 - Smart Displays and Projectors with Embedded Visible Light Communication (VLC) (PY03a-20)

In view of the latest developments in smart indoor services, a visible light positioning (VLP) algorithm is proposed in this project to provide the foundation for a smart display and projector system. The VLP algorithm mainly applies geometric methods based on homogeneous matrices to first calculate the camera angles and the user's location from a single input image that contains a rectangular and a circular luminaire. The calculation is then extended to images with a rolling-shutter pattern that corresponds to visible light communication (VLC). The algorithm is implemented in MATLAB and then converted to C++ for a broader range of applications. The test results show that the algorithm can estimate the camera angles and the user's location accurately, and our work can be further extended to indoor positioning for robotic vehicles and other relevant applications.
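
As an illustration of the underlying geometry (not the project's exact homogeneous-matrix derivation), the sketch below recovers the camera angles and position from the four image corners of a rectangular luminaire of known size using OpenCV's PnP solver; the luminaire dimensions, pixel coordinates, and camera intrinsics are made-up example values.

    import cv2
    import numpy as np

    # Corners of a rectangular ceiling luminaire in its own coordinate frame,
    # assuming a 0.6 m x 0.6 m panel mounted flat on the ceiling (z = 0).
    obj_pts = np.array([[0, 0, 0], [0.6, 0, 0], [0.6, 0.6, 0], [0, 0.6, 0]],
                       dtype=np.float32)
    # Corresponding pixel coordinates detected in the photo (illustrative values).
    img_pts = np.array([[812, 410], [1165, 402], [1180, 745], [820, 760]],
                       dtype=np.float32)
    # Assumed pinhole intrinsics of the smartphone camera.
    K = np.array([[1500, 0, 960], [0, 1500, 540], [0, 0, 1]], dtype=np.float32)

    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None)
    R, _ = cv2.Rodrigues(rvec)                 # rotation from luminaire frame to camera frame
    camera_position = (-R.T @ tvec).ravel()    # camera location in the luminaire frame (metres)
    euler_deg = cv2.decomposeProjectionMatrix(np.hstack([R, tvec]))[6].ravel()
    print("camera position:", camera_position, "camera angles (deg):", euler_deg)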

 

Demo 3 - Point-of-Load Cascading Buck Converter in Strong Magnetic Field (KW01b-20)

A magnetic field with a flux density of 4 T is used to guide particles in the Large Hadron Collider. The measurement circuitry in the ATLAS detector is powered at 2.5 V DC, regulated by point-of-load voltage regulators. To accommodate the extra power consumption that comes with increasing the detector's luminosity, it was proposed to replace the original linear regulators with buck converters and to raise the input voltage to 22 V DC, which reduces the resistive loss in the power cables.

Inductor core saturation can be prevented by using air-core inductors, but their lower inductance requires a megahertz-range switching frequency to keep the inductor current ripple low. With a typical buck topology, the resulting low duty ratio leads to a large switching loss. In this project, a cascading buck converter with a tripled duty ratio, using two air-core inductors to prevent core saturation in the strong magnetic field, was built and its functionality was verified.
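
The trade-offs above follow from the standard ideal buck-converter relations, which the sketch below evaluates; the inductance and switching frequency are illustrative values, not the project's actual design parameters.

    # Ideal buck-converter relations; L and f_sw are illustrative values only.
    V_in, V_out = 22.0, 2.5        # input and output voltages (V)
    f_sw = 2e6                     # assumed switching frequency: 2 MHz
    L = 1e-6                       # assumed air-core inductance: 1 uH

    D_single = V_out / V_in                        # duty ratio of a single buck stage (~0.11)
    D_cascaded = 3 * D_single                      # "tripled" duty ratio of the cascaded converter
    ripple = V_out * (1 - D_single) / (L * f_sw)   # peak-to-peak inductor current ripple (A)

    print(f"single-stage duty ratio: {D_single:.3f}")
    print(f"cascaded duty ratio:     {D_cascaded:.3f}")
    print(f"current ripple:          {ripple:.2f} A")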

 

Demo 4 - AWS Automatic Recognition and Control IoT System for Elderly (WKT01d-20)

With the continuing increase in Hong Kong's aging population, there is a need for a gesture- and voice-controlled emergency notification system to improve upon the current button and/or pendant systems, which require the user to access or carry an emergency call device. In this project, we design an AWS Automatic Recognition and Control IoT System for the elderly to control their home devices and send an emergency SMS or Gmail message with their hands or voice. Existing technologies have many limitations, including activation issues and complicated control-panel designs. Our design addresses these limitations and integrates smart home devices with image and voice recognition. We mainly use Siri and Raspberry Pi cameras to detect the user's instructions; the signal is transmitted from the Raspberry Pi to AWS, and AWS then publishes the control signals to an ESP32 to drive relay and IR products. The gesture and voice recognition system is designed to be 70 percent accurate and to control an individual appliance in less than five seconds. Users can use their hands or voice to control light bulbs, infrared fans, TV switches, etc.
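
The Raspberry Pi to AWS hop described above is essentially an MQTT publish over TLS; a minimal sketch of that step with the AWSIoTPythonSDK client is shown below, using placeholder endpoint, certificate paths, topic, and payload (the project's actual message format is not documented here).

    import json
    from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

    # Placeholder endpoint, certificate paths, and topic; substitute the real
    # AWS IoT Core endpoint and device credentials.
    client = AWSIoTMQTTClient("rpi-gesture-gateway")
    client.configureEndpoint("example-ats.iot.ap-east-1.amazonaws.com", 8883)
    client.configureCredentials("AmazonRootCA1.pem", "private.pem.key", "device.pem.crt")
    client.connect()

    # A recognized gesture or voice command mapped to a device action; the ESP32
    # subscribes to the same topic and drives the relay or IR transmitter.
    command = {"device": "light_bulb", "action": "on"}
    client.publish("home/control", json.dumps(command), 1)   # QoS 1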

 

Demo 5 - Hierarchy Decoding on Multi-cortical Activity for Brain Machine Interfaces (WY01a-20)

A brain-machine interface (BMI) is a technology that translates movement-related neural activity in the brain into commands for controlling other devices. Such technology can be used in medical applications to help people with motor disabilities restore motor function, while future applications are not limited to the medical area. Because the field is highly cross-disciplinary, both an understanding of brain activity and engineering technologies are fundamental and critical to this research. This project focused on the decoder, which translates neural activity in the brain and generates commands for other devices such as prostheses. Combining neuroscience and engineering, this project proposed a new decoder model and implemented it in a real application. The new decoder can improve the performance of the whole system in terms of both accuracy and reliability.
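
The abstract does not specify the proposed decoder model, so the sketch below shows only a common baseline for this kind of decoder: a ridge regression mapping binned spike counts to two-dimensional movement velocity, run on synthetic data.

    import numpy as np
    from sklearn.linear_model import Ridge

    # Synthetic data: binned spike counts from 96 channels and the 2-D movement
    # velocity they (noisily) encode.
    rng = np.random.default_rng(0)
    n_bins, n_neurons = 2000, 96
    spikes = rng.poisson(3.0, size=(n_bins, n_neurons)).astype(float)
    weights = rng.normal(size=(n_neurons, 2))
    velocity = spikes @ weights + rng.normal(scale=5.0, size=(n_bins, 2))

    # Fit on the first 1500 bins, evaluate decoding accuracy on the rest.
    decoder = Ridge(alpha=1.0).fit(spikes[:1500], velocity[:1500])
    print("held-out R^2:", decoder.score(spikes[1500:], velocity[1500:]))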

 

Demo 6 - CT Image Segmentation (YW03a-20)

Computed tomography (CT) is a medical imaging technique that allows doctors to interpret a patient's internal condition efficiently. Given its significance, many CT scans are produced every day, creating a huge workload for radiologists. Computer-aided diagnosis (CAD) systems are thus developed to address this problem by segmenting the CT scans and extracting the region of interest automatically. In this project, a CAD system using a level-set method with a global solution as the segmentation model is proposed. The proposed system applies several pre-processing techniques to remove CT artifacts and then applies a classic region-based level-set function, based on the Chan-Vese algorithm, to obtain the global solution on the 3D volume. Segmentation results and detected abnormalities are visualized in a user interface. The results show that the system segments the body parts accurately without segmentation error, and the project goals are achieved.
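
A minimal sketch of the Chan-Vese segmentation step, applied slice by slice with scikit-image, is shown below; the intensity windowing stands in for the project's pre-processing, and the parameter values are illustrative.

    import numpy as np
    from skimage.segmentation import chan_vese

    def segment_slice(ct_slice_hu):
        """ct_slice_hu: 2-D array of Hounsfield units for one CT slice."""
        # Simple soft-tissue windowing as a stand-in for the project's pre-processing.
        windowed = np.clip(ct_slice_hu, -150, 250).astype(float)
        normalized = (windowed - windowed.min()) / (np.ptp(windowed) + 1e-8)
        # Region-based (Chan-Vese) level-set segmentation of the slice.
        return chan_vese(normalized, mu=0.25, lambda1=1.0, lambda2=1.0,
                         init_level_set="checkerboard")

    # masks = np.stack([segment_slice(s) for s in ct_volume])   # per-slice masks for a 3-D scan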

 

Demo 7 - VLC-Reckoning: An Indoor Navigation Application that Combines Visible Light Communication and Dead Reckoning (PY03b-20)

Nowadays, the most prevalent indoor localization systems are RF-based. However, due to various factors such as the constant variation of received signal strength (RSS) and the different received intensities across smartphone models, these systems suffer from poor localization accuracy. Compared with RF localization systems, visible light communication (VLC) can provide a highly accurate location to the user due to the line-of-sight (LOS) link between the smartphone's camera and the LED light source. However, VLC also has some downsides. For instance, it can be easily blocked and cannot cover blind spots in sparsely illuminated areas. Therefore, an auxiliary system, pedestrian dead reckoning (PDR), is required; it uses the smartphone's built-in sensors to estimate the user's location when the VLC signal is unavailable. In this project, we characterize the localization accuracy of such a VLC- and PDR-based system, and develop methods to mitigate various sources of error and improve the accuracy and reliability of the system. Several experiments are conducted to verify the system's performance, and a VLC-based check-in and check-out system is implemented as a demonstration.
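
A minimal sketch of the PDR update used between VLC fixes is shown below; the step length and heading values are assumptions for illustration.

    import math

    def pdr_update(x, y, heading_deg, step_length_m=0.7):
        """Advance the position (x, y), in metres, by one detected step along
        heading_deg (0 = north, 90 = east), as reported by the phone's compass."""
        x += step_length_m * math.sin(math.radians(heading_deg))
        y += step_length_m * math.cos(math.radians(heading_deg))
        return x, y

    # Example: three steps heading roughly east after the last VLC position fix at (0, 0).
    pos = (0.0, 0.0)
    for heading in (88.0, 91.0, 90.0):
        pos = pdr_update(*pos, heading)
    print(pos)    # approximately (2.1, 0.01)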

Demo 8 - Temperature Measurement Using Embedded ML (and Thermal Face Recognition) (MWH03b-20)

Since 2019, COVID-19 has spread across many countries and seriously affected our daily lives. To identify people with fever, there are several body-temperature measurement methods on the market. However, these methods have various drawbacks in terms of efficiency, price, or privacy. Our project aims to use low-cost and portable hardware to build a body-temperature measurement system. Our system achieves 89.6% face detection accuracy, a temperature reading error of ±1.0°C, and a valid distance of 2.5 meters. The main difference between our system and existing solutions is that ours captures only low-resolution thermal images, which avoids privacy issues.

The system captures thermal images with a thermal camera and passes them to a Raspberry Pi, which runs a machine learning model, accelerated by a USB accelerator, to detect the position of the human face in the image. Finally, the target's body temperature is calculated.
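
A minimal sketch of the final temperature estimation step is shown below, assuming a radiometric thermal frame (per-pixel temperatures) and a detected face bounding box; the box format and the use of a high percentile over the forehead region are illustrative choices, not necessarily the project's method.

    import numpy as np

    def forehead_temperature(thermal_frame_c, face_box):
        """thermal_frame_c: 2-D array of per-pixel temperatures in deg C;
        face_box: detected face bounding box as (x, y, w, h) in pixels."""
        x, y, w, h = face_box
        forehead = thermal_frame_c[y:y + h // 3, x:x + w]     # top third of the face
        # A high percentile is more robust to stray hot pixels than the raw maximum.
        return float(np.percentile(forehead, 95))

    frame = 30.0 + 1.5 * np.random.rand(120, 160)             # fake 120x160 thermal frame
    frame[20:30, 60:90] = 36.6                                # warm "forehead" patch
    print(forehead_temperature(frame, (55, 18, 40, 45)))      # ~36.6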

 

Demo 9 - Smart Cane for the Visually Impaired (WKT01a-20)

To help the visually impaired community tackle navigation difficulties on roads and in unfamiliar environments, our group devised an obstacle-avoiding, guiding Smart Cane. Simulating the guidance of a guide dog, we based our design on the traditional white cane and attached a self-designed omni wheel underneath. By combining dual obstacle detection using ultrasonic sensing and computer vision, an obstacle-free path through the surrounding landscape can be mapped out. Users pair our accompanying mobile app with the Smart Cane through Bluetooth and can start their navigation journey by simply entering their desired destination in the app. While our computer vision model alone achieves 74% accuracy, after cross-referencing with the ultrasonic sensor output, our real-world navigation tests within HKUST achieve 80% accuracy.
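
A minimal sketch of the dual obstacle check is shown below, cross-referencing a computer-vision detection with an ultrasonic range reading before reacting; the gpiozero sensor class, GPIO pins, and distance threshold are assumptions, as the project's actual software stack is not documented here.

    from gpiozero import DistanceSensor

    # HC-SR04-style sensor wired to assumed GPIO pins; distance is reported in metres.
    ultrasonic = DistanceSensor(echo=24, trigger=23, max_distance=4)

    def obstacle_ahead(cv_detected: bool, threshold_m: float = 1.0) -> bool:
        """Confirm a computer-vision detection only when the ultrasonic range agrees."""
        return cv_detected and ultrasonic.distance < threshold_m

    # Called once per camera frame, e.g.:
    # if obstacle_ahead(cv_model_sees_obstacle(frame)):
    #     steer_omni_wheel_away()   # hypothetical actuation / haptic feedback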

Demo 10 - Lidar System Based on FLC Dammann Grating (AS01a-20)

Sampling rate and accuracy are key factors determining the performance of a LiDAR system, and the target orientation is also of interest in this project. The objective is to use a ferroelectric liquid crystal Dammann grating (FLCDG) LiDAR sensing approach to achieve a minimum of 50 frames per second and to measure the target orientation.

The FLCDG projects a 7×7 dot array of equal intensity. The distance and orientation can be calculated from the distribution of the dots projected by the FLCDG: the tilting angle of the dots and the standard deviation of the dot positions are used to calculate the distance and the target orientation, respectively. An autonomous car is programmed to use the standard deviation and tilting angle for decision making.

The results show an effective sensing distance of 30 cm, which can be used for short-range sensing, with an error of less than 5%. The latency is 50 ms on average, of which 25 ms is due to network transmission. The computation time can be reduced by improving the algorithm and the computer specification, and the sampling rate can be improved by using a camera with a higher sampling speed.
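
A minimal sketch of the geometry step is shown below: given the detected centroids of the 7×7 dot array, it computes the spread of the dot positions and the tilt of the pattern's principal axis; the mapping from these features to distance and orientation is left as a placeholder calibration, since the project's actual calibration is not documented here.

    import numpy as np

    def dot_array_features(centroids_px):
        """centroids_px: (49, 2) array of detected dot centres in pixels."""
        centred = centroids_px - centroids_px.mean(axis=0)
        spread = centred.std()                        # standard deviation of dot positions
        # Principal axis of the dot pattern (via SVD) gives the tilting angle.
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        tilt_deg = np.degrees(np.arctan2(vt[0, 1], vt[0, 0]))
        return spread, tilt_deg

    # distance_cm = a * spread + b    # placeholder linear calibration against known targets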

 

Judging Criteria

A maximum of 10 points for each criterion

 

  • Innovation
    Did the team seek innovative solutions to problems they have identified?
  • Clarity & Organization
    Are the project objectives and results clearly presented and explained?
  • Technology Challenge
    Is the work done in the project technically challenging?
  • Project Simulation/Demo
    Did the team perform well in their project simulation / demo?
  • Presentation Skills
    Is the project presented well with good English? 

 

Final Year Project Awards

Gold, Silver and Bronze Prizes will be awarded to the three teams with the highest scores from all the judges. The judging panel is formed by Advisory Board members. The scores given by the judging panel will not affect the students' actual FYP course grades. This year's FYP awards are supported by


FYP Demo Awards

Award    Project Title
Gold     Smart Cane for the Visually Impaired (WKT01a-20)
Silver   Smart Displays and Projectors with Embedded Visible Light Communication (VLC) (PY03a-20)
Bronze   Point-of-Load Cascading Buck Converter in Strong Magnetic Field (KW01b-20)

 

Visit our Gallery for group shots and photos of the award presentation.