The making of: My portfolio website
Hi Medium! 👋 This is my first blog post ever and it is the first part of a series I have decided to name “The making of” (cliché, I know) where I document the thought process, development and journey of various projects I’m working on. Most of the posts will revolve around software engineering, but I have ideas for some awesome hardware projects too! I hope you enjoy reading this, and who knows, maybe you’ll be inspired to make something similar.
The concept
As a web developer, one of the best ways to show your skills is with your portfolio website. My previous iterations had served their purpose, but they had been boring and static. I knew I wanted my site to be dynamic and unique, so I started coming up with some ideas.
I quickly decided that an animated background would be the best way to retain a good user experience whilst simultaneously adding unique creative flair to the site.
Initially, I thought a weather-based background could be quite interesting: it would be dynamic and unique for each individual user (something like the example above). However, my plan to collect location data for the custom heatmap raised significant privacy considerations around the geolocation of visitors. Geolocation access on the web is well documented (see the MDN Docs), but I felt the inevitable browser permission request would deter many visitors and be an unnecessary invasion of privacy, so I quickly decided that unique user data could not be used to generate the background.
Whilst brainstorming, I considered what unique, ever-changing data I generate myself that could be piped into the site. My music listening habits appealed to me as a possibility, as music lends itself well to artistic interpretation. Spotify also has a terrific API for fetching and interpreting listening activity, so it seemed like a good option for my portfolio site, and I decided to run with the idea and see what happened!
The process
I wanted this site to showcase my talent for web development, so I decided on a flexible, fast and familiar technology stack.
- Node.js
- Express
- React (Next.js)
(No database was needed: the data could simply be cached in Redis rather than stored in a full database architecture.)
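The caching idea is simple enough to sketch. The real site caches tokens and song data in Redis, but the expiry logic looks roughly like this (an in-memory `Map` stands in for Redis here, and the function names are my own, not the site's):

```typescript
// A hedged sketch of the token-caching logic. In production this would talk
// to Redis (e.g. SET with an EX ttl); a Map makes the idea easy to see.
interface CachedToken {
  value: string;
  expiresAt: number; // epoch milliseconds
}

const cache = new Map<string, CachedToken>();

function setCachedToken(key: string, value: string, ttlSeconds: number, now = Date.now()): void {
  cache.set(key, { value, expiresAt: now + ttlSeconds * 1000 });
}

function getCachedToken(key: string, now = Date.now()): string | null {
  const entry = cache.get(key);
  // Treat a missing or expired entry the same way: signal a cache miss,
  // so the caller knows to request a fresh token from Spotify.
  if (!entry || entry.expiresAt <= now) return null;
  return entry.value;
}
```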
The entire stack was written in TypeScript, an incredibly useful (though occasionally annoying) tool that adds type checking to JavaScript. Among its many benefits, it ensures that I write far more resilient error-handling code. If there is one thing you learn this year, TypeScript should be it.
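As a tiny illustration of what that buys you (a hypothetical example, not code from the site): once a value is typed as possibly `undefined`, the compiler refuses to let you use it until the missing case is handled.

```typescript
interface Song {
  name: string;
  artist: string;
}

// Spotify may return nothing when no song is playing, so the parameter is
// typed as possibly undefined; TypeScript forces the guard branch to exist.
function describe(song: Song | undefined): string {
  if (!song) return 'Nothing playing';
  return `${song.name} by ${song.artist}`;
}
```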
I quickly got to work, whipping up a simple frontend for development purposes, and began to set up the Spotify integration. The code would essentially consist of two components:
- The code that fetches the song I’m currently listening to.
- The code that generates a background image based on said song.
Step 1: Fetching the song
This function takes a Spotify authentication token as a parameter (this is either fetched from Redis or requested from Spotify) and returns the song that I am currently listening to.
As well as providing the parameters to generate a gradient, the song data also powers a nice little card on the site with the album cover, song name and artist.
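The original code isn't reproduced here, but a sketch of the fetch-and-parse step might look like this, following the shape of Spotify's currently-playing endpoint (`parseNowPlaying` and `getCurrentSong` are hypothetical names for illustration):

```typescript
interface NowPlaying {
  title: string;
  artist: string;
  albumArt: string;
  isPlaying: boolean;
}

// The relevant parts of the response from
// GET https://api.spotify.com/v1/me/player/currently-playing
interface SpotifyCurrentlyPlaying {
  is_playing: boolean;
  item: {
    name: string;
    artists: { name: string }[];
    album: { images: { url: string }[] };
  };
}

// Pull out just what the card needs: title, artist(s) and album art.
function parseNowPlaying(res: SpotifyCurrentlyPlaying): NowPlaying {
  return {
    title: res.item.name,
    artist: res.item.artists.map((a) => a.name).join(', '),
    albumArt: res.item.album.images[0]?.url ?? '',
    isPlaying: res.is_playing,
  };
}

async function getCurrentSong(token: string): Promise<NowPlaying | null> {
  const res = await fetch('https://api.spotify.com/v1/me/player/currently-playing', {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (res.status === 204) return null; // nothing playing right now
  return parseNowPlaying(await res.json());
}
```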
Step 2: Generating the gradient background
Okay, this might look quite complicated, but don't worry, I'll go through it line by line.
const audioFeatures = await getAudioFeatures(token, id);
This line fetches an audio analysis object for the track from Spotify. The API endpoint is documented here, and it essentially provides a section-by-section breakdown of a song, including information like the key, tempo, loudness and duration of each section.
const gradientArray = audioFeatures.sections.map((section) => {...})
This line lets me turn each section of a song into a colour based upon it, collecting those colours into an array that will be sent to the frontend of the site.
These lines generate a hue, saturation and lightness value for each section of a song, and consistently deliver varied and attractive colours.
There is a lot of math involved, but it is the culmination of much trial and error in generating “good” colours: in the initial iterations, it was spitting out only black and white.
Essentially, hue is mainly determined by tempo, saturation by volume and lightness by key.
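The exact mapping isn't shown above, but a sketch of the idea (tempo driving hue, volume driving saturation, key driving lightness) could look like this; the clamping ranges and constants here are illustrative, not the site's real tuning:

```typescript
// The fields of interest from one section of Spotify's audio analysis:
// tempo in BPM, loudness in dB (roughly -60 to 0), key as pitch class 0-11.
interface Section {
  tempo: number;
  loudness: number;
  key: number;
}

function sectionToHsl(s: Section): string {
  // Hue from tempo: map a typical 60-180 BPM range across the colour wheel.
  const hue = Math.round(((Math.min(Math.max(s.tempo, 60), 180) - 60) / 120) * 360);
  // Saturation from loudness: -60 dB (quiet) -> 0%, 0 dB (loud) -> 100%.
  const saturation = Math.round(Math.max(Math.min(1 + s.loudness / 60, 1), 0) * 100);
  // Lightness from key: spread pitch classes over a 35-65% band, keeping the
  // output away from pure black and white (Spotify uses -1 for "no key").
  const lightness = Math.round(35 + (Math.max(s.key, 0) / 11) * 30);
  return `hsl(${hue}, ${saturation}%, ${lightness}%)`;
}
```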
Nearly done! This line works out what percentage of the song is taken up by the current section and adds it to the running percentage progress through the song, giving the position the colour should occupy in the gradient.
This entire process generates an array with each item containing a colour and a position, used to create a gradient unique to each song.
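Assembling that array of stops might be sketched like this (`gradientStops` is a hypothetical name; the colour function is passed in for simplicity, standing in for the HSL mapping described above):

```typescript
interface TimedSection {
  duration: number; // seconds
}

interface GradientStop {
  colour: string;
  position: number; // percentage along the gradient, 0-100
}

function gradientStops(
  sections: TimedSection[],
  colourOf: (s: TimedSection, index: number) => string
): GradientStop[] {
  const total = sections.reduce((sum, s) => sum + s.duration, 0);
  let progress = 0;
  return sections.map((s, i) => {
    // Each section claims a share of the gradient proportional to its
    // share of the song, accumulated into a running position.
    progress += (s.duration / total) * 100;
    return { colour: colourOf(s, i), position: progress };
  });
}
```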
This data is sent to the frontend along with the song details to create a dynamic background that is attractive and interesting.
The frontend fetches this information periodically using an amazing little React hook called SWR, ensuring that the currently displayed song is up to date.
I also apply a significant blur and tilt to get the perfect appearance.
Some examples
Conclusion
After getting the background to work reliably, I added all the standard, boring-ish features of a portfolio site (not interesting enough to talk about here): simple but attractive elements to display my work and show off my skills.
My site can be viewed here:
The source code for the site backend and frontend can be found here:
Thanks for reading! ❤️