About
Hi! This is my website.
I made this website (with the help of my friends Claude, GPT-4 and Cursor) to display some of the film pictures I've taken, but also as a little exploration into Machine Learning and ‘taste’.
I wanted to see if there was a way to use ML models to gain some insight into what kinds of pictures I took, and which attributes of those pictures I liked the most.
Right now all of the pictures are completely uncropped and unedited. I may go over them later and reupload edited versions, but I mostly just wanted to get the website up and running.
From the images, I tried to extract:
- The objects within the photograph (cars, buildings, animals, etc.)
- The scene of the photograph (urban, nature, water, indoor, etc.)
- The general colour profile (the dominant colours and the average colour)
The idea was to see whether any trends emerged in the things I chose to take pictures of, and this website also aims to let you see whether there are recurring themes in the pictures that you like.
Features on the website:
- Images by themes: 11 themes that an ML model has classified the pictures into.
- Images by scene (under the things_ header): classifications of the pictures that are more specific than the ‘themes’ tab.
- Images by objects (also under the things_ header): Find images with objects that have been classified by an object detection model.
- Finding pictures with similar colours (on the image page): Using the histogram data of a picture, find other pictures with similar histogram data (a rough sketch of this idea follows this list).
- Finding pictures similar to a desired colour (the colours_ tab): You can choose a colour and see whether there are any pictures similar to it.
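As a rough idea of how the colour-similarity features work under the hood, the sketch below compares simple per-channel colour histograms. It's illustrative rather than the site's exact code; the bin count and the distance measure are assumptions.

```python
# Illustrative sketch: build a per-channel RGB histogram for each picture and
# compare histograms with a simple distance (smaller = more similar).
import numpy as np
from PIL import Image

def colour_histogram(path, bins=16):
    """Normalised per-channel RGB histogram, concatenated into one vector."""
    pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
    hist = np.concatenate([
        np.histogram(pixels[:, c], bins=bins, range=(0, 256))[0] for c in range(3)
    ]).astype(float)
    return hist / hist.sum()

def histogram_distance(h1, h2):
    return float(np.abs(h1 - h2).sum())  # L1 distance between histograms
```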
The methodology
Python
I ran the YOLOv11 model over all of the pictures I had taken to detect objects like vehicles, people or trees. I didn't set a confidence threshold for the object or scene detections and just included every single classification.
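The detection pass looked roughly like the sketch below; the folder path, the weights file and the confidence value are placeholders rather than my exact script.

```python
# Rough sketch: run YOLOv11 over a folder of photos and keep every detection,
# with effectively no confidence threshold. Paths/weights are placeholders.
from pathlib import Path
from ultralytics import YOLO

model = YOLO("yolo11n.pt")  # pretrained YOLOv11 weights

detections = {}
for image_path in Path("photos").glob("*.jpg"):
    result = model(image_path, conf=0.01)[0]  # tiny threshold, keep nearly everything
    labels = [model.names[int(box.cls)] for box in result.boxes]
    detections[image_path.name] = labels
```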
I then used Places365 to determine the scene/environment of the picture (i.e. a street, a building or an outdoor landscape). The model, as you may imagine, has 365 possible outputs, so I used ChatGPT to group those 365 categories into 10 or so more generalised groups.
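Roughly, the scene step looked like the sketch below. It's based on the public demo code from the Places365 repo (the weights and category file names are the ones that repo provides); the grouping dictionary here is just a tiny illustrative subset, not the actual ChatGPT-generated mapping.

```python
# Rough sketch of the Places365 scene step, based on the public demo code
# from the CSAILVision/places365 repo. SCENE_GROUPS is a tiny illustrative
# subset, not the real ChatGPT-generated grouping.
import torch
from torch.nn import functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(num_classes=365)
checkpoint = torch.load("resnet18_places365.pth.tar",
                        map_location="cpu", weights_only=False)
state_dict = {k.replace("module.", ""): v
              for k, v in checkpoint["state_dict"].items()}
model.load_state_dict(state_dict)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# categories_places365.txt lists the 365 class names, one per line.
classes = [line.strip().split(" ")[0][3:]
           for line in open("categories_places365.txt")]

SCENE_GROUPS = {  # illustrative subset of the 365 -> ~10 group mapping
    "street": "urban", "skyscraper": "urban",
    "beach": "water", "forest_path": "nature",
}

img = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = F.softmax(model(img), dim=1)
top_class = classes[int(probs.argmax())]
scene = SCENE_GROUPS.get(top_class, "other")
```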
Next, I extracted the 5 main colours from each image (using K-means clustering with 5 clusters) and combined them into a weighted average, a ‘dominant average colour’, which you can use to find pictures with similar colours.
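A minimal sketch of that colour step (the resize dimensions and K-means settings here are assumptions, not my exact values):

```python
# Rough sketch: pull out 5 dominant colours with K-means and weight them by
# cluster size to get a single 'dominant average colour'.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

img = Image.open("photo.jpg").convert("RGB").resize((200, 200))  # downsample for speed
pixels = np.asarray(img).reshape(-1, 3).astype(float)

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pixels)
palette = kmeans.cluster_centers_  # the 5 main colours (RGB)
weights = np.bincount(kmeans.labels_, minlength=5) / len(kmeans.labels_)

# Weighted average of the palette, weighted by how many pixels fall in each cluster.
dominant_average = (palette * weights[:, None]).sum(axis=0).astype(int)
```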
Frontend
I then used various LLMs to help me create the frontend.
Note:
A fair few of the object detections and scene categorisations are ‘wrong’, but I kept them in because I think that even if the actual label is wrong, there is likely something about those pictures that is similar to the other pictures under the same label. Even if something wasn't actually a ‘bird’ or a ‘clock’, it probably looks somewhat similar to the item it's been labelled as, so it would probably fit in with the other pictures of birds or clocks. This is all a very simple model of similarity and recommendation, but I think there is still probably some merit in it.