
Transforming Meal Planning with AI: The Culinary Cognition Hackathon Success

by Runninghill Software Development
Published on: 6/19/2024

CulinaryCognition transforms the eternal question of ‘What will we eat?’ into a delightful discovery, where your pantry meets our technology to create meals that inspire.

Challenges:

AI Model Training Challenges: Our journey began with the challenge of training AI models on limited infrastructure. Initially, we leveraged Google Colab’s Nvidia T4 GPUs for their tensor cores, which accelerated training. However, we soon hit storage limitations on Colab’s available tiers, which prevented us from training the AI on our full dataset. Our workaround was to repurpose a gaming PC equipped with an RTX 3050 GPU. Although the RTX 3050 lacks tensor cores, which lengthened training time, this setup allowed us to proceed once we had overcome some initial hurdles with TensorFlow installation and configuration.

Containerizing TensorFlow.js: Integrating TensorFlow.js into our Express server presented another set of challenges, primarily due to the nascent state of the technology and specific access requirements. Certain dependencies needed root access for installation within our container environment. Despite these obstacles, we managed to successfully containerize our server, achieving seamless operation once the initial issues were resolved.
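The kind of container setup we are describing can be sketched as a Dockerfile; the base image tag, installed packages, and file paths below are illustrative assumptions, not our exact hackathon configuration.

```dockerfile
# Illustrative sketch only: the image tag, packages, and paths are
# assumptions, not our exact configuration.
FROM node:18

# Some TensorFlow.js dependencies build native addons at install time,
# which needed root access inside our container environment.
USER root
RUN apt-get update && apt-get install -y build-essential python3

WORKDIR /app
COPY package*.json ./
RUN npm install

COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```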

Image Recognition Accuracy: Our image recognition accuracy was limited by the compressed time frame available for model training. With a more refined dataset and an extended training period, we anticipate significant improvements in the model’s accuracy.

API Image Transmission: Implementing a method to transmit images via API between the frontend and backend proved complex. Initially, we did not anticipate this need. After some exploration, we opted to encode images as base64 strings for API transmission. We encountered a further challenge with size limits on the data our server could receive, but a configuration adjustment resolved this issue, allowing smooth data transfer.
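The base64 approach can be sketched in plain Node.js; the field names and the size limit below are illustrative, not our exact configuration.

```javascript
// Sketch of base64 image transmission between frontend and backend.
// Field names and the size limit are illustrative assumptions.

// Frontend side (conceptually): encode raw image bytes as a base64 string
// so they can travel inside a JSON payload.
function encodeImage(bytes) {
  return { image: Buffer.from(bytes).toString('base64') };
}

// Backend side: decode the base64 string back into raw bytes for the model.
function decodeImage(payload) {
  return Buffer.from(payload.image, 'base64');
}

// With Express, the default JSON body limit is small, so large images are
// rejected until the limit is raised, e.g.:
//   app.use(express.json({ limit: '10mb' }));

const original = Buffer.from([0xff, 0xd8, 0xff, 0xe0]); // fake JPEG header bytes
const payload = encodeImage(original);
const roundTripped = decodeImage(payload);
console.log(roundTripped.equals(original)); // true
```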

Converting Python Models to JavaScript: Converting our TensorFlow model from Python to JavaScript in TensorFlow.js was fraught with difficulties due to our team’s limited experience with the framework. After consulting several tutorials and extensive discussions on Stack Overflow, we successfully completed the conversion, enabling the integration of our model into the web environment.
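The workflow we eventually settled on follows the standard TensorFlow.js conversion path; the file names below are illustrative, not our actual paths.

```shell
# Install the converter alongside the Python TensorFlow environment.
pip install tensorflowjs

# Convert a saved Keras model (HDF5) into the TensorFlow.js layers format.
# "model.h5" and "web_model/" are illustrative names.
tensorflowjs_converter --input_format keras model.h5 web_model/
```

The resulting `model.json` plus weight shard files can then be loaded from JavaScript with `tf.loadLayersModel`.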

Implementation:

Our solution, CulinaryCognition, harnesses a robust stack of modern technologies to deliver a seamless and interactive user experience. The front-end of the web application is built using Angular, ensuring a responsive and intuitive interface. We integrated ngx-webcam within the Angular framework to access the user’s camera directly, enabling them to easily capture images of their food items for analysis.

On the backend, we employed Azure Functions coupled with Node.js and Express to create a scalable web API. This setup coordinates communication between OpenAI and our TensorFlow image recognition model, which lies at the heart of our image analysis capabilities. TensorFlow’s machine learning algorithms allow us to accurately identify ingredients from user-provided images, enabling personalised meal recommendations.
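The coordination between the image model and OpenAI can be sketched as a plain handler function that an Express route would delegate to. The classifier and meal-suggestion steps are stubbed here, and every name in this sketch is illustrative rather than our production code.

```javascript
// Sketch of the backend request flow: image in, ingredients out, meals back.
// classifyImage and suggestMeals are stubs standing in for our TensorFlow
// model and the OpenAI call; all names here are illustrative.

// Stub for the TensorFlow image recognition model.
function classifyImage(imageBytes) {
  // A real implementation would run the converted model on the decoded image.
  return ['tomato', 'basil', 'mozzarella'];
}

// Stub for the OpenAI meal suggestion step.
function suggestMeals(ingredients, dietaryConstraints) {
  return ingredients.length > 0
    ? [`A dish using ${ingredients.join(', ')} (${dietaryConstraints})`]
    : [];
}

// The handler the API route delegates to: decode, classify, suggest.
function handleAnalyze(body) {
  const imageBytes = Buffer.from(body.image, 'base64');
  const ingredients = classifyImage(imageBytes);
  const meals = suggestMeals(ingredients, body.diet || 'none');
  return { ingredients, meals };
}

const response = handleAnalyze({
  image: Buffer.from('fake-image').toString('base64'),
  diet: 'vegetarian',
});
console.log(response.ingredients); // ['tomato', 'basil', 'mozzarella']
```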

For hosting and continuous integration/continuous deployment (CI/CD) pipelines, we utilised Azure DevOps. This not only provided us with high availability and scalability but also streamlined our development process, enabling rapid updates and maintenance.

Angular Front-end Development Process

Results:

Our team successfully developed an innovative Angular-based web application, seamlessly integrated with OpenAI’s cutting-edge technology, designed to revolutionise meal planning. By utilising our custom TensorFlow image recognition software, the platform identifies ingredients from user-uploaded images or directly from photos taken with the user’s device camera. This flexibility enhances user engagement by allowing real-time ingredient identification, making meal planning both interactive and convenient.

Users can customise their meal suggestions according to specific dietary constraints, ensuring that the recommendations cater to individual health and dietary needs. Our AI-driven system offers a diverse array of meal choices, providing users with multiple culinary options to explore. Impressively, we achieved the milestone of launching this application “live” for external users within a mere 24 hours, demonstrating our team’s dedication and technical prowess in delivering complex projects under tight deadlines.

The application also caters for users without webcams, or with limited mobile data, by offering the option to type in a list of ingredients manually rather than capturing or uploading photos.
