
PhD Update 13: A half complete

...almost! In the last post, I talked about the AAAI-22 doctoral consortium, the sentiment analysis models I've implemented, and finally LDA topic analysis. Before we continue to what I've been doing since then, here's a list of all the posts in this series so far:

As always, you can follow all my PhD-related blog posts in the PhD tag on my blog.

Since the last post, I've participated in both the AAAI-22 Doctoral Consortium and a Hackathon in AI for Sustainability! I've written separate posts about these topics to avoid cluttering this post, so if you're interested I recommend checking those posts out:

CLIP works.... kinda

In the last post, I mentioned I was implementing a sentiment analysis model based on CLIP. I've been doing this in PyTorch, as the pretrained CLIP model is also implemented in PyTorch. This has caused a number of issues, since it requires a GPU with a CUDA compute capability of 3.7+, which excludes a number of the GPUs I currently have access to, making things rather awkward. Thankfully, a few months ago I built a GPU server which I have somehow forgotten to blog about, so I have been able to use it for the majority of the CLIP experiments I've been running.

Anyway, this process is now complete, so I can share a graph or two on the training progress:

These graphs are as always provisional and not final results, so please don't take them as such. The graph on the left is the training accuracy, and the graph on the right is the validation accuracy. I used the ViT-B/32 variant of CLIP, with 2 dense layers of 512 units after it, before the final softmax dense layer that made the prediction (full model summary available upon request - please ensure you send requests by email from an official email account I can verify). What's astonishing here is CLIP's ability to 'zero-shot' - the ability to make a prediction in a target domain it hasn't seen yet with no additional training or fine tuning. It's one thing seeing it in a blog post, but quite another seeing it in person on your own dataset.

The reasoning for the multiple lines on each graph takes some explanation. Because the CLIP model is trained on tweets that have both an image and an emoji, only ~14K of the ~700K+ tweets in my dataset satisfy both of these requirements. With this in mind, I implemented a system to augment tweets that had a supported emoji but no image with the image CLIP thought best matched them. It was done with the following algorithm (a sketch of the selection step follows the list below):

  1. Rank each image against the tweet in question
  2. Pick a random image from those CLIP has at least 75% confidence in
  3. If it doesn't have at least 75% confidence in any image, pick the next best image
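To make the selection step concrete, here's a minimal sketch of that logic in Python. It assumes CLIP's confidence scores for one tweet against every candidate image have already been computed into a probs array - the function name and the use of NumPy are illustrative, not the actual implementation:

# Minimal sketch: choose an image for a tweet from CLIP confidence scores.
# `probs` is a 1D array of CLIP's softmax confidence for each candidate image.
import numpy as np

def pick_image(probs, threshold=0.75, rng=np.random):
    """Return the index of the image to pair with the tweet."""
    confident = np.flatnonzero(probs >= threshold)
    if confident.size > 0:
        # Several images are a confident match: pick one at random so the
        # same image doesn't get reused for every tweet.
        return int(rng.choice(confident))
    # No image clears the threshold: fall back to the single best match.
    return int(np.argmax(probs))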

The reason for this somewhat convoluted algorithm is to avoid a situation where CLIP picks the same image for every tweet. With this in place, I increased the size of the dataset to a peak of ~55K (it should be higher still, but I have yet to find the bug even after combing through all related code multiple times). I could then train multiple CLIP models, each with a different threshold as to how confident CLIP had to be in the augmented dataset - this is what's shown in the graphs above.

From the graphs above, I can tell that interestingly any image is better than none at all - at least in terms of training accuracy. With a peak validation accuracy of 86.48% (vs 84.61% without dataset augmentation), this outstrips the transformer encoder I trained earlier by a fair margin.

It's cool to compare the validation accuracy, but what would be really fascinating (and also more objective) would be to compare this to human-labelled tweets as a ground truth. While I'm unsure if I can publish the exact results and details of this experiment at this time, I can say that the results were very surprising: the transformer encoder narrowly beat CLIP in accuracy when comparing them against the ~2K human-labelled tweets!

The effect of this is that the images may not contain much information that's useful when predicting positive/negative sentiment, so attempts to extract information from the images likely need to use a different strategy. I speculate that the reason the images appeared to boost the validation accuracy of CLIP is that they assisted CLIP in figuring out what was actually being asked of it - similar to the "prompt engineering" the authors of CLIP mention in their section on CLIP's limitations.

Wrapping this half up

To wrap the social media half of my project up (for now at least), I'm writing a journal article to summarise the (sub)project. This will also include data and experiments from some of the students who participated in the Hackathon in AI for Sustainability 2022. I doubt that the journal I ultimately end up submitting to would like it very much if I release too many more details about this at this time, so a deeper discussion on the results, the journal I've chosen with my PhD supervisor's help to submit to, and the paper will have to wait until I finish it and it (hopefully!) gets accepted and published.

It's been slow-going on writing this journal article - both because it's my first one and because I'm drawing content together from many different sources, but I think I'm getting there.

Once I've finished writing this journal article, I believe I'll be turning my attention to the rainfall radar half of my project while I wait for a decision on whether it'll be published or not - so you can expect more on this in the next post in this series.

The plan

Going on a bit of a tangent, the CLIP portion of the project has been very helpful in introducing me to how important optimising the data preprocessing pipeline is - especially the data augmentation part. By preprocessing in parallel and reshuffling some things, I was able to bump the average usage of my Nvidia GeForce 3060 GPU from around 10% to well over 80%, speeding up the process of augmenting the data from ~10 minutes per tweet to just 1.5 seconds per tweet! It's well worth spending a few hours on your data processing pipeline if you know you'll be training and retraining your model a bunch of times as you tweak it, as you could save yourself many hours of training time.

A number of key things to watch out for that I've found so far, in no particular order (a short sketch illustrating the first and third points follows the list):

  • Preprocessing data in parallel is very important. You can usually boost performance by as many times as you have CPU cores!
  • Reading data from a stream makes it awkward to parallelise. It's much easier and simpler to handle e.g. 1 image per file than a stream of images in a single file.
  • Image decoding is expensive, meaning that you'll most likely hit a CPU bottleneck if your model handles images. Ensuring images are JPEG can help, as PNGs are more expensive to decode.
    • Similarly, the image decoder you use can significantly affect performance. I used simplejpeg, but I've heard that if you wrap Tensorflow's native image decoding in an input pipeline that can also be good as it can compile it into something more efficient. Test different methods with your own dataset to see which is best.
  • Given that your preprocessing pipeline will run for every epoch, investigate if you can do any expensive steps just once before training begins.
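As promised above, here's a short sketch illustrating the first and third points: decoding a directory of JPEGs in parallel with simplejpeg and a process pool. The directory layout and helper names are assumptions for illustration only:

# Decode every .jpg in a directory in parallel, one worker per process.
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

import simplejpeg

def decode_one(filepath):
    """Decode a single JPEG file into an RGB numpy array."""
    return simplejpeg.decode_jpeg(filepath.read_bytes(), colorspace="RGB")

def decode_all(image_dir, workers=8):
    """Spread the expensive JPEG decoding across multiple processes."""
    paths = sorted(Path(image_dir).glob("*.jpg"))
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(decode_one, paths, chunksize=32))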

In the future I'd like to write a blog post that more thoroughly compares PyTorch and Tensorflow now that I have more experience with both of them. They have different strengths and weaknesses which make them both good fits for different types of models and projects.

All this experience will be very useful indeed when I turn my attention back to the rainfall radar portion of my project. My current plan is to investigate training a CLIP-style model to contrastively train the rainfall radar + heightmap and the water depth data against one another. As of now I haven't looked into the specifics of how CLIP's training process actually works, but I'm hoping it's not too complicated to either re-use their code or implement my own.
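To sketch what I mean, here's a minimal PyTorch version of CLIP's symmetric contrastive loss applied to this idea. The encoders themselves, the embedding shapes, and the temperature value are all placeholder assumptions - this is a sketch of the technique, not something I've implemented yet:

# CLIP-style symmetric contrastive loss between two sets of embeddings:
# one from a rainfall radar + heightmap encoder, one from a water depth encoder.
import torch
import torch.nn.functional as F

def clip_style_loss(radar_embed, depth_embed, temperature=0.07):
    """radar_embed, depth_embed: [batch, dim] embeddings of matching pairs."""
    radar_embed = F.normalize(radar_embed, dim=-1)
    depth_embed = F.normalize(depth_embed, dim=-1)
    # Similarity of every radar sample against every depth sample in the batch.
    logits = radar_embed @ depth_embed.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    # The i-th radar sample should match the i-th depth sample, and vice versa.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2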

In training such a CLIP model, it should in theory tell me whether there's any relationship between the two at all that a model can learn. If there is, then I can then move on to the next step and connect a decoder of some description to the model that will produce an image as an output. If anyone has any good resources on this, please do comment below as I'm rather unsure as to where to begin (I've tried an autoencoder design in the past for this model - albeit without CLIP - and it didn't go very well).

Conclusion

Since last time, I've trained a bunch of CLIP models, and compared these (in more ways than one) to the transformer encoder I trained earlier. To extract useful information from images, a different strategy is likely needed as it doesn't appear that they contain much useful information about sentiment in the context of a flooding situation.

In training the CLIP models however, I've gained a lot of very valuable experience that will greatly help me in implementing an efficient model and pipeline for the rainfall radar half of my project. If I could go back and do this all again, I would have started the social media half of my project first, as it's taught me a whole bunch of very useful things that would have saved me a lot of time on my rainfall radar project....

If you've found this interesting, are confused about anything here, or have any suggestions, please do comment below! I'd love to hear from you.

Hackathon in AI for Sustainability 2022

The other week, I took part in the Hackathon in AI for Sustainability 2022. While this was notable because it was my first hackathon, what was more important was that it was partially based on my research! For those who aren't aware, I'm currently doing a PhD at the University of Hull with the project title "Using Big Data and AI to Dynamically Predict Flood Risk". While part of it really hasn't gone according to plan (I do have a plan to fix it, I just need to find time to implement it), the second half of my project on social media has been coming together much more easily.

To this end, my supervisor asked me about a month ago whether I wanted to help organise a hackathon, so I took the plunge and said yes. The hackathon had 3 projects for attendees to choose from:

  • Project 1: Hedge identification from earth observation data with interpretable computer vision algorithms
  • Project 2: Monopile fatigue estimation from nonlinear waves using deep learning
  • Project 3: Live sentiment tracking during floods from social media data (my project!)

When doing research, I've found that there are often many more avenues to explore than there is time to explore them. To this end, a hackathon is an ideal time to explore these avenues that I have not had the time to explore previously.

To prepare, I put together some datasets of tweets and associated images - some from the models I've actually trained, and others (such as one based on the hashtag #StormFranklin) that I downloaded specially for the occasion. Alongside this, I also trained and prepared a model and some sample code for students to use as a starting point.

On the first day of the event, the leaders of the 3 projects presented the background and objectives of the 3 projects available for students to choose from, and then we headed to the lab to get started. While unfortunate technical issues were a problem for all 3 projects, we managed to find ways to work around them.

Over the next few days, the students participating in the hackathon tackled the 3 projects and explored different directions. At first, I wasn't really sure about what to do or how to help, but I soon started to figure out how I could assist by explaining things, helping students with their problems, fetching and organising more data, and other such things.

While I can't speak for the other projects, the outputs of the hackathon for my project are fascinating insights into things I haven't had time to look into myself - and I anticipate that we may be able to draw them together into something more formal.

Just some of the approaches taken in my project include:

  • Automatically captioning images to extract additional information
  • Using other sentiment classification models to compare performance
    • VADER: A rule-based model that classifies to positive/negative/neutral
    • BART: A variant of BERT
  • Resolving and inferring geolocations of tweets and plotting them on a map, with the goal of increasing relevance of tweets

The outputs of the hackathon have been beyond my wildest dreams, so I'm hugely thankful to all who participated in my project as part of the hackathon!

While I don't have many fancy visuals to show right now, I'll definitely keep you updated with progress on drawing it all together in my PhD Update blog post series.

A learning experience | AAAI-22 in review

Hey there! As you might have guessed, it's time for my review of the AAAI-22 conference(?) (Association for the Advancement of Artificial Intelligence) I attended recently. It's definitely been a learning experience, so I think I've got my thoughts in order in a way that means I can now write about them here.

Attending a conference has always been on the cards - right from the very beginning of my PhD - but it's only recently that I have had something substantial enough that it would be worth attending one. To this end, I wrote a 2 page paper last year and submitted it to the Doctoral Consortium, which is a satellite event that takes place slightly before the actual AAAI-22 conference. To my surprise I got accepted!

Unfortunately in January AAAI-22 was switched from being an in-person conference to being a virtual conference instead. While I appreciate and understand the reasons why they made that decision (safety must come first, after all), it made some things rather awkward. For example, the registration form didn't mention a timezone, so I had to reach out to the helpdesk to ask about it.

For some reason, the Doctoral Consortium wanted me to give a talk. While I was nervous beforehand, the talk itself seemed to go ok (even though I forgot to create a slide somewhere in the middle) - people seemed to find the subject interesting. They also assigned a virtual mentor to me as well, who was very helpful in checking my slide deck for me.

The other Doctoral Consortium talks were also really interesting. I think the one that stood out to me was "AI-Driven Road Condition Monitoring Across Multiple Nations" by Deeksha Arya, in which the presenter was using CNNs to detect damage to roads - and found that a model trained on data from 1 country didn't work so well in another - and talked about ways in which they were going to combat the issue. The talk on "Creating Interpretable Data-Driven Approaches for Tropical Cyclones Forecasting" by Fan Meng also sounded fascinating, but I didn't get a chance to attend on account of their session being when I was asleep.

As part of the conference, I also submitted a poster. I've actually done a poster session before, so I sort of knew what to expect with this one. After a brief hiccup and rescheduling of the poster session I was part of, I got a 35 minute slot to present my poster, and had some interesting conversations with people.

Technical issues were a constant theme throughout the event. While the Doctoral Consortium went well on Zoom (there was a last minute software change - I'm glad I took the night before to install and check multiple different video conferencing programs, otherwise I wouldn't have made it), the rest of the conference wasn't so lucky. AAAI-22 was held on something called VirtualChair / Gather.town, which as it turned out was not suited to the scale of the conference in question (200 people in each room? yikes). I found myself with the seemingly impossible task of using a website that was so laggy it was barely usable - even on my i7-10750H I bought back in 2020. While the helpdesk were helpful and suggested some things I could try, nothing seemed to help. This severely limited the benefit I could gain from the conference.

At times, there were also a number of communication issues that made the experience a stressful one. Some emails contradicted each other, and others were unclear - so I had to email the organisers at multiple points to request clarification. The wording on some of the forms (especially the registration form) left a lot to be desired. All in all, this led to a very large number of wasted hours figuring things out and going back and forth to resolve confusion.

It also seemed as though everyone assumed that I knew how a big conference like this worked and what each event was about, when this was not the case. For example, after the start of the conference I received an email saying that they hoped I'd been enjoying the plenary sessions, when I didn't know that plenary sessions existed, let alone what they were about. Perhaps in future it would be a good idea to distribute a beginner's guide to the conference - by email or something.

For future reference, my current understanding of the different events in a conference is as follows:

  • Doctoral Consortium: A series of talks - perhaps over several sessions - in which PhD students submit a 2 page paper in advance and then present their projects.
  • Workshop: A themed event in which a bunch of presenters submit longer papers and talk about their work
  • Tutorial: In which the organisers deliver content centred around a specific theme with the aim of educating the audience on a particular topic
  • Plenary session: While workshops and tutorials may run in parallel, plenary sessions are talks at a time when everyone can attend. They are designed to be general enough that they are applicable to the entire audience.
  • Poster session: A bunch of people create a poster about their research, and all of these posters are put up in a room. Then, researchers are designated specific sessions in which they stand by their poster and people come by and chat with them about their research. At other times, researchers are free to browse other researchers' posters.

Conclusion

Even though the benefit gained directly from talks, workshops, and other activities at the conference has been extremely limited due to technical, communication, and timezone issues, the experience of attending this conference has been a beneficial one. I've learnt about how a conference is structured, and also had the chance to present my research to a global audience for the first time!

In the future, I hope that I get the chance to attend my first actual conference as I feel I'm much better prepared, and have a better understanding as to what I'm getting myself in for.

PhD Update 12: Is it enough?

Hey there! It's another PhD update blog post! Sorry for the lack of posts here, the reason why will become apparent below. In the last post, I talked about the AAAI-22 Doctoral Consortium conference I'll be attending, and also about sentiment analysis of both tweets and images. Before we talk about progress since then, here's a list of all the posts in this series so far:

As in all the posts preceding this one, none of the things I present here are finalised and are subject to significant change as I double check everything. I can think of no better example of this than the image classification model I talked about last time - the accuracy of which has dropped from 75.6% to 60.7% after I fixed a bug....

AAAI-22 Doctoral Consortium

The most major thing in my calendar in the next few weeks is surely the AAAI-22 Doctoral Consortium. I've been given a complimentary registration to the main conference after I had a paper accepted, so as it turns out the next few weeks are going to be rather busy - as have been the last few weeks preparing for this conference.

Since the last post when I mentioned that AAAI-22 has been moved to be fully virtual, there have been a number of developments. Firstly, the specific nature of the doctoral consortium (and the wider conference) is starting to become clear. Not having been to a conference before (a theme which will come up a lot in this post), I'm not entirely sure what to expect, but as it turns out I was asked to create a poster that I'll be presenting in a poster session (whether this is part of AAAI-22 or the AAAI-22 Doctoral Consortium is unclear).

I've created one with baposter (or a variant thereof given to me by my supervisor some time ago) which I've now submitted. The thought of presenting a poster in a poster session at a conference to lots of people I don't know has been a rather terrifying thought though, so I've been increasingly anxious over this over the past few weeks.

This isn't the end of the story though, as I've also been asked to do a 20 minute presentation with 15 minutes for questions / discussion, the preparations for which have taken perhaps longer than I anticipated. Thankfully I've had prior experience presenting at my department's PGR seminars (which have also been online recently), so it's not as daunting as it would otherwise be, but as with the poster there's still the fear of presenting to lots of people I don't know - most of whom probably know more about AI than I do!

Despite these fears and other complications that you can expect from an international conference (timezones are such a pain sometimes), I'm looking forward to seeing what other people have done, and also hoping that the feedback I get from my presentation and poster isn't all negative :P

Sentiment analysis: a different perspective

Since the last post, a number of things have changed in my approach to sentiment analysis. The reason for this is - as I alluded to earlier in this post - a bug I fixed in my model that predicts the sentiment of images from Twitter, which caused the validation accuracy to drop from 75.6% to 60.7%. As of now, I have the following models implemented for sentiment analysis:

  1. Text sentiment prediction
    • Status: Implemented, and works rather well actually - top accuracy is currently 79.6%
    • Input: Tweet text
    • Output: Positive/negative sentiment
    • Architecture: Transformer encoder (previously: LSTM)
    • Labels: Emojis from text [manually sorted into 2 categories]
  2. Image sentiment prediction
    • Status: Implemented, but doesn't work very well
    • Input: Images attached to tweets
    • Output: Positive/negative sentiment
    • Architecture: ResNet50 (previously: CCT, but I couldn't get it to work)
    • Labels: Positive/negative sentiment predictions from model #1
  3. Combined sentiment prediction
    • Status: Under construction
    • Input: Tweet text, associated image
    • Output: Positive/negative sentiment
    • Architecture: CLIP → a few dense layers [provisionally]
    • Labels: Emojis from text, as in model #1

Not mentioned here of course is my rainfall radar model, but that's still waiting for me to return to it and implement a plan I came up with some months ago to fix it.

Of particular note here is the new CLIP-based model. After analysing model #2 (image sentiment prediction) and sampling some images from each category on a per-flood basis, it soon became clear that the output was very noisy and wasn't particularly useful, which combined with its rather low performance means that I'm pretty much considering it a failed attempt.

With this in mind, my supervisor suggested I look into CLIP. While I mentioned it in the paper I've submitted to AAAI-22, until recently I haven't had a clear picture of how it would actually be useful. As a model, CLIP trains on text-image pairs, and learns to identify which image belongs to which text caption. To do this, it has 2 encoders - 1 for the text, and 1 for the associated image.

You can probably guess where this is going, but my new plan here is to combine the text and images of the tweets I have downloaded into 1 single model rather than 2 separate ones. I figure that I can potentially take advantage of the encoders trained by the CLIP models to make a prediction. If I recall correctly, I've read a paper that has done this before with tweets from Twitter - just not with CLIP - but unfortunately I can't locate the paper at this time (I'll edit this post if I do find it in the future - if you know which paper I'm talking about, please do leave a DOI link in the comments below).

At the moment, I'm busy implementing the code to wrap the CLIP model and make it suitable for my specific learning task, but this is as yet incomplete. As noted in the architecture above, I plan on concatenating the output from the CLIP encoders and passing it through a few dense layers (as the [ batch_size, concat, dim ] tensor is not compatible with a transformer, which operates on sequences), but I'm open to suggestions - please do comment below.
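To illustrate, here's a hedged sketch of such a wrapper in PyTorch, assuming OpenAI's pretrained CLIP model (which exposes encode_text / encode_image). The layer sizes and the decision of whether to freeze the CLIP encoders are illustrative assumptions, not the final architecture:

# Wrap pretrained CLIP: concatenate its text + image embeddings, then classify.
import torch
import torch.nn as nn

class ClipSentiment(nn.Module):
    def __init__(self, clip_model, embed_dim=512, classes=2):
        super().__init__()
        self.clip = clip_model  # pretrained CLIP; could be frozen or fine-tuned
        self.head = nn.Sequential(
            nn.Linear(embed_dim * 2, 512),  # text + image embeddings, concatenated
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, classes),        # positive / negative logits
        )

    def forward(self, text_tokens, images):
        text_features = self.clip.encode_text(text_tokens).float()
        image_features = self.clip.encode_image(images).float()
        combined = torch.cat([text_features, image_features], dim=-1)
        return self.head(combined)  # pair with e.g. nn.CrossEntropyLoss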

My biggest concern here is that the tweet text will not be enough like an image caption for CLIP to produce useful results. If this is the case, I have some tricks up my sleeve:

  1. Extract any alt text associated with images (yes, twitter does let you do this for images) and use that instead - though I can't imagine that people have captioned images especially often.
  2. Train a new CLIP model from scratch on my dataset - potentially using this model architecture - this may require quite a time investment to get the model working as intended (Tensorflow can be confusing and difficult to debug with complex model architectures).

Topic analysis

Another thing I've explored is topic analysis using gensim's LDAModel. Essentially, as far as I can tell, LDA is an unsupervised algorithm that groups the words found in the source input documents into a fixed number of related groups, each of which contains words that are often found in close proximity to one another.
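For reference, here's a minimal sketch of this step with gensim. The tokenised documents and parameter values below are placeholders - the real input is the preprocessed tweet text:

# Train an LDA topic model over tokenised documents with gensim.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

documents = [["flood", "warn", "river", "rise"],
             ["stay", "safe", "love"]]  # placeholder tokenised tweets

dictionary = Dictionary(documents)                        # word <-> id mapping
corpus = [dictionary.doc2bow(doc) for doc in documents]   # bag-of-words vectors

lda = LdaModel(corpus, id2word=dictionary, num_topics=20, passes=10)
for topic_id, words in lda.print_topics(num_words=10):
    print(topic_id, words)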

As an example, if I train an LDA model for 20 groups, I get something like the following for some of those categories:

place   death   north   evacu   rise    drive   hng coverag toll    cumbria
come    us  ye  issu    london  set line    offic   old mean
rescu   dead    miss    uttarakhand leav    bbc texa    victim  kerala  defenc
flashflood  awai    world   car train   make    best    turn    school  boat
need    town    problem includ  effect  head    assist  sign    philippin   condit

(Full results may be available upon request, if context is provided)

As you can tell, the results of this are mixed. While it has grouped the words, the groups themselves aren't really what I was hoping for. The groups here seem to be quite generic, rather than being about specific things (words like rescu, dead, miss, and victim in a category), and also seem to be rather noisy (words like "north", "old", "set", "come", "us").

My first thought here was that instead of training the model on all the tweets I have, I might get better results if I trained it on just a single flood at once - since some categories are dominated by flood-specific words (hurrican, hurricaneeta, hurricaneiota for example). Here's an extract from the results of that for 10 categories on the #StormDennis hashtag:

warn    flood   stormdenni  met issu    offic   water   tree    high    risk
good    hope    look    love    i'm game    morn    dai yellow  nice
ye  lol head    enjoi   tell    book    got chanc   definit valentin
mph gust    ireland coast   wors    wave    brace   wow doesn't photo

(Again, full results may be available on request if context is provided)

Again, mixed results here - and still very noisy (though I'd expect nothing else from social media data) - though it is nice to see words like gust, ireland, and coast there in the bottom category.

The goal here was to attempt to extract some useful information about which places have been affected and by how much, by combining this with the sentiment analysis (see above), but my approach doesn't seem to have captured what I intended. Still, it does seem that on a per-category basis (for all tweets) there is some difference in sentiment:

(Above: A chart of the sentiment of each LDA topic, using the 20 topic model that was trained on all available tweets.)

It seems as if there's definitely something going on here. I speculate that positive words are more often than not used near other positive words, and so categories are more likely to skew to 1 extreme or another - though acquiring proof of this theory would likely require a significant time investment.

While LDA topic analysis was an interesting diversion, I'm not sure how useful it is in this context. Still, it's a useful thing to have in my growing natural language processing toolkit (how did this happen) - and perhaps future research problems will benefit from it more.

Moving forwards, I could imagine that instead of doing LDA topic analysis it might be beneficial to group tweets by place and perhaps run some sentiment analysis on that instead. This comes with its own set of problems of course (especially pinning down / inferring the location of the tweets in question), but this is not an insurmountable problem given strategies such as named entity recognition (which the Twitter Academic API does for you, believe it or not).

Conclusion

While most of my time has been spent on preparing for the AAAI-22 conference, I have managed to do some investigating into the twitter data I've downloaded. Unfortunately, not all my methods have been a success (image sentiment analysis, LDA topic analysis), but these have served as useful exercises for both learning new techniques and understanding the dataset better.

Moving forwards, I'm going to implement a new CLIP-based model that I'm hoping will improve accuracy over my existing models - though I'm somewhat apprehensive that tweet text won't be descriptive enough for CLIP to produce a useful output.

With the AAAI-22 conference happening next week, I'll be sure to write up my experiences and post about the event here. I'm just hoping that I've done enough to make attending a conference like this worth it, and that what I have done is actually interesting to people :-)

Sources and further reading

PhD Update 11: Answers to our questions

Heya! It's time for another PhD update blog post. Sometimes, answers to the questions one asks come in the form of more questions. In this post, as I predicted in the last post, I'll be talking mainly about my work with tweets from Twitter, as I haven't yet had time to return to the Temporal CNN Autoencoder. Before we start though, here's a list of all the parts in this series so far:

As usual, none of the things I present here are finalised, and are subject to significant change as I double check everything.

Conferences part 2

In the last post, I talked about the conferences I have applied to and not applied to. In this one, I can now say that I have been accepted for the AAAI-22 Doctoral Consortium! It was going to be held in Vancouver, Canada - but has since been moved to be fully virtual and online. While I both understand the reasoning behind the decision and am relieved I don't have to travel, it is a bit of a shame that I won't get the chance to have those face-to-face conversations you don't get when in a video call.

Despite the move to being fully virtual, I'm still both excited to attend and mildly terrified about presenting the things I've been doing to a potentially large audience.

Looking forward, at the suggestion of my supervisor I plan to finish writing up that AI+HADR paper I mentioned in the last update as a journal article instead, and then submit that. While I'm working through the review process for that paper, I'll return to the Temporal Autoencoder / rainfall radar subproject and work on implementing my idea for it.

Tweet sentiment analysis

After the rather rushed analysis of the data in the last post, I've now taken the time to analyse the data more thoroughly. A number of things became apparent here, but first let's look at the research questions I've asked:

  1. Is there a more negative response to more sudden / severe floods?
  2. Can we classify images by the sentiment of the associated tweet?

Answering these questions has not been straightforward, but I'm now at a point where I have some preliminary answers (which, I stress, are not double checked and not peer reviewed).

Unfortunately, the answer to the first question there is that it can't easily be answered with the current data available. As it turns out, I have been unable to find an objective and consistent measure of how severe or sudden a flood was.

One might think that, say, the amount of insurance damages would be a good choice. This doesn't work out though, because not all flooding events have (public) damage estimates. Those that do are often measured against different goalposts: flood A might be measured in property damages, and flood B in economic impact.

Another measure I investigated was the number of homes destroyed or people displaced. This too, as it turns out, has multiple issues. For one, while multiple estimates are available for some floods, they don't always agree. Estimates for a single flood might range from 800 to 2400 homes destroyed, are often measured at different points in the history of the flooding event, and it's sometimes unclear at what stage in a flood's lifecycle a given estimate was made.

Even if such estimates were consistent (which they really aren't), there's another issue: they are often limited to country borders. For example, a government agency may estimate the number of homes destroyed for their country, but not other countries. This is totally reasonable: a government is concerned first and foremost with the people within its borders. Sadly floods, storms, and hurricanes rarely discriminate across such borders. Take Storm Christoph for example: it hit the UK in January 2021, but after that it continued on and hit Scandinavia too.

The other question above is thankfully much easier to answer - it deserves its own section in this post though, so see below for that. It's not all bad news on the tweet sentiment analysis front though - I did find some unexpected results while analysing the data. To explain, let's look at a fancy new chart I've plotted:

(Above: Various floods and their overall sentiment - both with replies included and excluded.)

This fancy chart shows the overall sentiment of a number of different flooding events, with replies included (going down) and excluded (going up). What's fascinating here is that it appears to suggest that the replies contribute significantly towards the percentage of positive tweets. Perhaps people are most likely to tweet words of encouragement (e.g. "stay safe :hugs:")?

With this in mind, I correlated the total number of tweets made in a flood with the overall sentiment (as a percentage), and got a Pearson correlation coefficient of -0.54, which indicates a medium correlation. Apparently, if more people tweet about a flood, it's more likely to have a more positive overall sentiment. If replies are excluded, it works out to -0.31, which would indicate a weaker negative correlation.
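For reference, the correlation itself is a one-liner with scipy - the numbers below are placeholders rather than my actual per-flood figures:

# Pearson correlation between tweet volume and overall sentiment per flood.
from scipy.stats import pearsonr

tweet_counts = [1200, 5400, 800, 23000]       # total tweets per flood (placeholder)
sentiment_percent = [62.0, 48.5, 70.1, 41.2]  # overall sentiment per flood (placeholder)

r, p_value = pearsonr(tweet_counts, sentiment_percent)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")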

Image classification

Another thing I've been working on is classifying images associated with tweets, using the sentiment of the tweet as a label. With roughly ~175K images to work with, this has proved to be a useful exercise - resulting in ~75.23% validation accuracy over 9 epochs (rising to 96.9% training accuracy by epoch 50, suggesting it's overfitting and that I need more data to improve validation accuracy any further). While unfortunately I can't share a sample of positive/negative images predicted by the model due to data privacy rules, I can talk about the structure of the model and show a confusion matrix.

I started out with a Compact Convolutional Transformer model as it looks very cool and has some significant benefits over more traditional model architectures, but there's a bug in my implementation somewhere I can't spot, and it only yields 10% to 20% accuracy on Fashion MNIST. To avoid wasting too much time, I'm now using a prebuilt ResNet50 initialised with random weights that takes in images sized 128 x 128 pixels (images larger or smaller than this are automatically resized without preserving aspect ratio).

While the accuracy of the model is slightly lower than that of the model that predicts the sentiment of the tweets themselves, looking at samples that I quickly threw together with a bit of Bash and ImageMagick and the confusion matrix (see below) reveals that it is in fact doing something useful. Here's that Bash 2-liner:

shuf path/to/tweets-labelled.tsv | awk '/positive$/ { print("/absolute/path/to/media_dir/" $1); }' | head -n64 >/tmp/sample-pos.txt;
montage @/tmp/sample-pos.txt -geometry 960x540+10+10 -tile 8x8 /tmp/sample-pos.jpeg

The @path/to/file.txt syntax in the montage command call there reads a list of filepaths from a file instead of directly specifying them on the command line. By replacing positive in the awk filter with negative and sample-pos.txt with sample-neg.txt, the same procedure can also generate a random sample for the negative category too.

From looking at a single sample of 64 positive images and 64 negative images generated using the method above, positive images generally include:

  • Cats (by far the most popular of course)
  • Cupcakes
  • People, some of which are helping others in a flood

Whereas negative images generally include:

  • Floods
  • Damage to homes, buildings, etc

My next immediate step here is to plot a confusion matrix to better understand how the model is performing, as I'm slightly concerned that it's ignoring the minority class (in this case, positive tweets). I've mostly completed this already, but it's just not quite ready to show here yet as I need to double check and revise some stuff.

Of course, given that the source dataset is very noisy (social media data generally is) and relatively difficult for AI models to understand, I think this is a good result.

From here, if I have time I'd like to combine this image classification model with the earlier tweet sentiment analyser model to create a single model that can more accurately predict the sentiment of both the text of a tweet and the associated image at the same time. To do this, I'm probably going to investigate and use CLIP - more on this in a future post.

Conclusion

While I still haven't done anything with the Temporal Autoencoder due to other priorities, I'm hoping to return to it once I've wrapped up this social media section of the project. I have made significant progress on analysing the social media data - both the textual tweets and the associated images, and I plan to combine the models I've trained to classify both text and images into a single model. It's not the end of the road yet though: while I've found some answers, they are just leading to more questions.

PhD Update 10: Sharing with the world

Hey there - is it that time already? Another PhD update blog post! And in double digits too! In this one, I'll be talking mainly about the social media project I've been working on, as I haven't had time to work on the temporal CNN much since last time. I've also taken some time off in August, so technically this is only ~just over 1 month's worth of progress here.

Before we begin, here's a list of posts in this series so far. They give useful context - you probably won't understand this one without reading them first.

As with the last post, none of the graphs here are finalised, and are a work in progress. There ~~may~~ probably are multiple nasty bugs that invalidate the results that I haven't found yet.

AAAI Doctoral Consortium 2022

The main thing I've done since last time is apply for the AAAI Doctoral Consortium 2022. As I understand it, this is a specific part of the main AAAI conference (Association for the Advancement of Artificial Intelligence) that is designed for PhD student researchers. To apply, you have to submit a cover page, your CV, and a 2 page thesis summary.

The cover page wasn't too bad, and updating my CV and getting it checked by the careers service at my university was simply a case of doing it. The thesis summary on the other hand was more of a challenge than I expected - it's quite difficult to summarise your entire 3 years of (planned) work in just 2 pages! It helped me to picture it as a high-level overview / conversation starter rather than a free-standing paper in its own right - even if it looks like one.

Although this took longer than I thought to prepare, I did in fact get my submission together in time. I was glad that I left some extra time at the end though, as the rules were both very strict for what you can and can't do with your paper and unclear since they were written for the main AAAI conference. In addition I found at least 2 mistakes in the instructions and rules which appeared to be left over from previous years and hadn't been updated.

AI + HADR 2021

The other conference which I attempted to apply for, at the suggestion of my supervisor, is AI + HADR 2021. This stands for Artificial Intelligence for Humanitarian Assistance and Disaster Response, and it's a virtual workshop happening in December 2021. The submission calls for a 6 page paper (5 for content, and 1 for references) on things related to disaster response.

My plan here was to write a paper on the social media work I've been doing and submit that, with the idea of talking about the existing sentiment analysis I've done (see update 9) and focusing on answering a research question using the sentiment analysis model I've implemented.

Unfortunately, things didn't go to plan as I ran out of time - even with my supervisor helping to write some parts of the paper. After finding a number of bugs in my data processing pipeline and failing to see any obvious trends in the sentiment analysis graphs I plotted, I realised that it was unlikely that I was going to be able to submit for it this time around.

The plan from here is to take some more time over it and answer my other research question instead, and tidy up and re-use our existing unfinished submission for IJCNN 2022 (I think that's the right link) - the submission deadline for which I think is going to be in January 2022.

Social media

While I've spent time writing submissions for various conferences and workshops, I have also been doing a bit of social media data analysis too.

To start with, I implemented a new endpoint for labelling tweets using a given saved model checkpoint. After using this to label the various datasets I've acquired (with the best transformer-based model checkpoint I have, which I think is ~78% accurate - got to double check everything to make sure I haven't mixed anything up), I then got to work plotting some graphs. First, I plotted a simple bar graph of the overall sentiment of the different datasets I've downloaded.


(Above: A bar graph showing the overall sentiment of some of the floods in my dataset.)

I'm not really sure what to make of this - I suspect context-specific information is required to fully interpret it. I couldn't find any reliable context-specific information on short notice for the AI + HADR paper, so I'm going to keep looking if I can find the time to do so. Asking someone from the Energy and Environment Institute may be a good idea.

After this, I binned the tweets over time and used this to plot a combined graph showing both the tweet frequency and sentiment over time using Gnuplot.


(Above: A graph showing the frequency (blue) and sentiment (green and red) of tweets over time.)

Again here, I'm not sure what to make of this. Even cropping out the long tail of people talking about the flooding event afterwards (to make it easier to see the sentiment over time as the actual event occurred) doesn't seem to help uncover any clear trends. It could be said that for Hurricane Iota the sentiment got more positive over time at the beginning, but this does not hold true for Storm Christoph and others - and without context-specific information it's difficult to tell if there are any meaningful conclusions that can be drawn here.

To this end, after talking with my supervisor we've got some idea of things I'm going to try - so more on this in an upcoming PhD update blog post.

Finally, I've also had a discussion with my supervisor and when I'm ready to publish something on my social media work, I'm going to make the code behind it open source (probably either GPLv3 or MPL-2.0). If you're interested, the code for downloading tweets using Twitter's Academic API is already open source: https://www.npmjs.com/package/twitter-academic-downloader.

Temporal CNN

I haven't really done anything on the Temporal CNN since last time, but I wanted to make sure it wasn't left out of this post! It's definitely still on the cards - the plan is that once I've got this data analysis done and some meaningful social media results, I'm going to return to the Temporal CNN and put the plan I described in the last post into action.

Conclusion

Since last time I've mainly been writing and analysing social media data. While I didn't manage to apply to AI + HADR 2021, I did manage to submit for AAAI Doctoral Consortium 2022 - I'll find out if I've been accepted on the 15th October 2021.

Next up, I'm going to be working on answering my other research question first - more on this in a later blog post. If I have time, I'll put some effort into the Temporal CNN - though I doubt I'll have anything to show on that front next time. Finally, I'm going to be arranging my PhD Panel 4 (where did all the time go?) - hopefully for before the end of November, availability of those involved permitting.

If you are finding this series of blog posts on my PhD interesting, please do comment below. It's great to see that the stuff I'm working on for my PhD is actually interesting to someone.

Sources and further reading

PhD, Update 9: Results?

Hey, it's time for another PhD update blog post! Since last time, I've been working on tweet classification mostly, but I have a small bit to update you on with the Temporal CNN.

Here's a list of posts in this series so far before we continue. If you haven't already, you should read them as they give some context for what I'll be talking about in this post.

Note that the results and graphs in this blog post are not final, and may change as I further evaluate the performance of the models I've trained (for example I recently found and fixed a bug where I misclassified all tweets without emojis as positive tweets by mistake).

Tweet classification

After trying a bunch of different tweaks and variants of models, I now have an AI model that classifies tweets. It's not perfect, but I am reaching ~80% validation accuracy.

The model itself is trained to predict the sentiment of a tweet, with the label coming from a preset list of emojis I compiled manually (by looking at a list of the emojis present in the dataset, which I calculated with a quick Bash one-liner). Each tweet is classified by adding up how many emojis from each category it contains, and the category with the most emojis is then chosen as the label to predict - a small sketch of this labelling step is shown a little further below. Here's the list of emojis I've compiled:

Category Emojis
positive ๐Ÿ˜‚๐Ÿ‘๐Ÿคฃโค๐Ÿ‘๐Ÿ˜Š๐Ÿ˜‰๐Ÿ˜๐Ÿ˜˜๐Ÿ˜๐Ÿค—๐Ÿ’•๐Ÿ˜€๐Ÿ’™๐Ÿ’ชโœ…๐Ÿ™Œ๐Ÿ’š๐Ÿ‘Œ๐Ÿ™‚๐Ÿ˜Ž๐Ÿ˜†๐Ÿ˜…โ˜บ๐Ÿ˜ƒ๐Ÿ˜ป๐Ÿ’–๐Ÿ’‹๐Ÿ’œ๐Ÿ˜น๐Ÿ˜œโ™ฅ๐Ÿ˜„๐Ÿ’›๐Ÿ˜ฝโœ”๐Ÿ‘‹๐Ÿ’—โœจ๐ŸŒน๐ŸŽ‰๐Ÿ‘Š๐Ÿ˜‹๐Ÿ˜๐ŸŒž๐Ÿ˜‡๐ŸŽถโญ๐Ÿ’ž๐Ÿ˜บ๐Ÿ˜ธ๐Ÿ–ค๐ŸŒธ๐Ÿ’๐Ÿ€๐ŸŒผ๐ŸŒŸ๐Ÿค๐ŸŒท๐Ÿฑ๐Ÿค“๐Ÿ˜Œ๐Ÿ˜›๐Ÿ˜™
negative ๐Ÿฅถโš ๐Ÿ’”๐Ÿ˜ฐ๐Ÿ™„๐Ÿ˜ฑ๐Ÿ˜ญ๐Ÿ˜ณ๐Ÿ˜ข๐Ÿ˜ฌ๐Ÿคฆ๐Ÿ˜ก๐Ÿ˜ฉ๐Ÿ™ˆ๐Ÿ˜”โ˜น๐Ÿ˜ฎโŒ๐Ÿ˜ฃ๐Ÿ˜•๐Ÿ˜ฒ๐Ÿ˜ฅ๐Ÿ˜ž๐Ÿ˜ซ๐Ÿ˜Ÿ๐Ÿ˜ด๐Ÿ™€๐Ÿ˜’๐Ÿ™๐Ÿ˜ ๐Ÿ˜ช๐Ÿ˜ฏ๐Ÿ˜จ๐Ÿ‘Ž๐Ÿคข๐Ÿ’€๐Ÿ˜ค๐Ÿ˜๐Ÿ˜–๐Ÿ˜๐Ÿ˜ˆ๐Ÿ˜‘๐Ÿ˜“๐Ÿ˜ฟ๐Ÿ˜ต๐Ÿ˜ง๐Ÿ˜ถ๐Ÿ˜ฆ๐Ÿคฅ๐Ÿค๐Ÿคง๐ŸŒงโ˜”๐Ÿ’จ๐ŸŒŠ๐Ÿ’ฆ๐Ÿšจ๐ŸŒฌ๐ŸŒชโ›ˆ๐ŸŒ€โ„๐Ÿ’ง๐ŸŒจโšก๐Ÿฆ†๐ŸŒฉ๐ŸŒฆ๐ŸŒ‚

These categories are stored in a tab separated values file (TSV), allowing the program I've written that trains new models to be completely customisable. Given that there were a significant number of tweets with flood-related emojis (๐ŸŒงโ˜”๐Ÿ’จ๐ŸŒŠ๐Ÿ’ฆ๐Ÿšจ๐ŸŒฌ๐ŸŒชโ›ˆ๐ŸŒ€โ„๐Ÿ’ง๐ŸŒจโšก๐Ÿฆ†๐ŸŒฉ๐ŸŒฆ๐ŸŒ‚) in my dataset, I tried separating them out into their own class, but it didn't work very well - more on this later.
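Here's a hedged sketch of that labelling step in Python, assuming the 2-column category<TAB>emojis TSV described above. The function names are illustrative, and multi-codepoint emoji would need more careful handling than the simple substring counting shown here:

# Label a tweet with the emoji category that appears most often in it.
import csv

def load_categories(tsv_path):
    """Map each category name to its string of emojis."""
    with open(tsv_path, newline="", encoding="utf-8") as handle:
        return {row[0]: row[1] for row in csv.reader(handle, delimiter="\t")}

def label_tweet(text, categories):
    """Return the category whose emojis appear most often, or None if no emoji."""
    counts = {name: sum(text.count(emoji) for emoji in emojis)
              for name, emojis in categories.items()}
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else None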

The dataset I've downloaded comes from a number of different queries and hashtags, and is comprised of just under 1.4 million tweets, of which ~90K tweets have at least 1 emoji in the above categories.

A pair of pie charts showing statistics about the twitter dataset.

Tweets were downloaded with a tool I implemented using a number of flood related hashtags and query strings - from generic ones like #flood to specific ones such as #StormChristoph.

Of the entire dataset, ~6.44% contain at least 1 emoji. Of those, ~49.57% are positive, and ~50.43% are negative:

Category Number of tweets
No emoji 1306061
Negative 45367
Positive 44585
Total tweets 1396013

The models I've trained so far are either a transformer, or a 2-layer LSTM with 128 units per layer - plus a dense layer with a softmax activation function at the end for classification. If batch normalisation is enabled, every layer except the dense layer has a batch normalisation layer after it:

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
bidirectional (Bidirectional (None, 100, 256)          336896    
_________________________________________________________________
batch_normalization (BatchNo (None, 100, 256)          1024      
_________________________________________________________________
bidirectional_1 (Bidirection (None, 256)               394240    
_________________________________________________________________
batch_normalization_1 (Batch (None, 256)               1024      
_________________________________________________________________
dense (Dense)                (None, 2)                 514       
=================================================================
Total params: 733,698
Trainable params: 732,674
Non-trainable params: 1,024
_________________________________________________________________
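For reference, here's a hedged reconstruction of that bidi + batchnorm variant in Keras. The sequence length (100) and embedding size (200) are inferred from the layer shapes and parameter counts in the summary above, so they may not match the real preprocessing exactly:

# Reconstruct the 2-layer bidirectional LSTM classifier from the summary above.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(100, 200)),               # 100 tokens x 200-dim embeddings
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.BatchNormalization(),
    layers.Bidirectional(layers.LSTM(128)),
    layers.BatchNormalization(),
    layers.Dense(2, activation="softmax"),          # positive / negative
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])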

I tried using more units and varying the number of layers, but it didn't help much (in hindsight this was before I found the bug where tweets without emojis were misclassified as positive).

Now for the exciting part! I split the tweets that had at least 1 emoji into 2 datasets: 80% for training, and 20% for validation. I trained a few different models tweaking various different parameters, and got the following results:

2 categories: training accuracy

2 categories: validation accuracy

Each model has a code that is comprised of multiple parts:

  • c2: 2 categories
  • bidi: LSTM, bidirectional wrapper enabled
  • nobidi: LSTM, bidirectional wrapper disabled
  • batchnorm: LSTM, batch normalisation enabled
  • nobatchnorm: LSTM, batch normalisation disabled
  • transformer: Transformer (just the encoder part)

It seems that, if you're an LSTM, being bidirectional does result in a boost to both stability and performance (performance here meaning how well the model works, rather than how fast it runs - that would be efficiency instead). Batch normalisation doesn't appear to help all that much - I speculate this is because the models I'm training really aren't all that deep, and batch normalisation apparently helps most with deep models (e.g. ResNet).

For completeness, here are the respective graphs for 3 categories (positive, negative, and flood):

3 Categories: Training accuracy

3 Categories: Validation accuracy

Unfortunately, 3 categories didn't work quite as well as I'd hoped. To understand why, I plotted some confusion matrices at epochs 11 (left) and 44 (right). For both category counts I've picked out the charts for bidi-nobatchnorm, as it was the best performing variant (see above).
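For reference, here's a minimal sketch of how confusion matrices like these can be produced with scikit-learn - the labels and predictions below are placeholders, not my validation data:

# Plot a confusion matrix from validation labels and model predictions.
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

y_true = [0, 1, 1, 0, 1]   # placeholder validation labels
y_pred = [0, 1, 0, 0, 1]   # placeholder model predictions

matrix = confusion_matrix(y_true, y_pred)
ConfusionMatrixDisplay(matrix, display_labels=["negative", "positive"]).plot()
plt.savefig("confusion-matrix.png")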

2 Categories: confusion matrices

3 Categories: confusion matrices

The 2 category model looks to be reasonably accurate, with no enormous issues, as we'd expect from the ~80% accuracy above. Twice as many negative tweets are misclassified as positive compared to the other way around for some reason - it's possible that negative tweets are more difficult to interpret. Given the noisy dataset (it's social media and natural language, after all), I'm pretty happy with this result.

The 3 category model has fared less brilliantly. Although we can clearly see it has improved over the training process (and would likely continue to do so if I gave it longer than 50 epochs to train), its accuracy is really limited by its tendency to overfit - preventing it from performing as well as the 2 category model (which, incidentally, also overfits).

Although it classifies flood tweets fairly accurately, it seems to struggle more with positive and negative tweets. This leads me to suspect that training a model on flood / not flood might be worth trying - though as always I'm limited by a lack of data (I have only ~16K tweets that contain a flood emoji). Perhaps searching Twitter for flood emojis directly would help gather additional data here.

Now that I have a model that works, I'm going to move on to answering some research questions with this model. To do so, I've upgraded my twitter-academic-downloader to download information about attached media - downloading attached media itself deserves its own blog post. After that, I'll be looking to publish my first ever scientific papery thing (don't know the difference between journal articles and other types of publication yet), so that is sure to be an adventure.

Temporal CNN(?)

In Temporal CNN land, things have been much slower. In my latest attempt to fix the accuracy issues I've described in detail in previous posts, I've implemented a vanilla convolutional autoencoder by following this tutorial (the links to the full code behind the tutorial are broken, which is really annoying). This does image-to-image translation, so it would be ideal to modify to support my use-case. After implementing it with Tensorflow.js, I hit a snag:

Autoencoder: expected vs actual

After trying all the loss functions I could think of and all sorts of other variations of the model, for sanity's sake I tried implementing the model in Python (the original language used in the blog post). I originally didn't want to do this, since all the data preprocessing code I've written so far is in Javascript / Node.js, but within half an hour and some copying and pasting, I had a very quick model implemented. After training a teeny model for just 7 epochs on the Fashion MNIST dataset, I got this:

Autoencoder: epoch 7 in Tensorflow for Python

...so clearly there's something different in Tensorflow.js compared to Tensorflow for Python, but after comparing them dozens of times, I'm not sure I'll be able to spot the difference. It's a shame to give up on the Tensorflow.js version (it also has a very nice CLI I implemented), but given how long I've spent on this problem in the background, I'm going to move forwards with the Python version. To that end, I'm going to work on converting my existing data preprocessing code into a standalone CLI for converting the data into a format that I can consume in Python with minimal effort.
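For illustration, a teeny convolutional autoencoder of the kind described above can be sketched in Keras as follows. This is a generic reconstruction trained on Fashion MNIST with assumed layer sizes, not the actual script:

# A small convolutional autoencoder: 28x28 greyscale in, 28x28 reconstruction out.
import tensorflow as tf
from tensorflow.keras import layers

(train_x, _), (test_x, _) = tf.keras.datasets.fashion_mnist.load_data()
train_x = train_x[..., None] / 255.0
test_x = test_x[..., None] / 255.0

autoencoder = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),   # 14x14
    layers.Conv2D(8, 3, strides=2, padding="same", activation="relu"),    # 7x7 bottleneck
    layers.Conv2DTranspose(8, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2D(1, 3, padding="same", activation="sigmoid"),             # reconstruction
])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(train_x, train_x, epochs=7, validation_data=(test_x, test_x))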

Once that's done, I'm going to expand the quick autoencoder Python script I implemented into a proper CLI program. This involves adding:

  • A CLI argument parser (argparse is my weapon of choice in Python for now)
  • A settings system that has a default and a custom (TOML) config file
  • Saving multiple pieces of data to the output directory:
    • A summary of the model structure
    • The settings that are currently in use
    • A checkpoint for each epoch
    • A TSV-formatted metrics file

The existing script also spits out a sample image (such as the one above - just without the text) for each epoch too, which I've found to be very useful indeed. I'll definitely be looking into generating more visualisations on-the-fly as the model is training.
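To give a rough idea of the shape I have in mind, here's a minimal sketch of that CLI skeleton. The flag names, the third-party toml package, and the output file layout are placeholder assumptions on my part rather than the final design:

#!/usr/bin/env python3
# Minimal sketch only - flag names and file layout are placeholders.
import argparse, os
import toml                 # pip install toml
import tensorflow as tf

DEFAULTS = { "epochs": 50, "batch_size": 64, "dir_output": "output" }

parser = argparse.ArgumentParser(description="Train the convolutional autoencoder")
parser.add_argument("--config", help="Optional TOML config file that overrides the defaults")
args = parser.parse_args()

# Settings system: defaults, optionally overridden by a custom TOML config file
settings = dict(DEFAULTS)
if args.config:
    settings.update(toml.load(args.config))

os.makedirs(os.path.join(settings["dir_output"], "checkpoints"), exist_ok=True)

# Save the settings that are currently in use
with open(os.path.join(settings["dir_output"], "settings.toml"), "w") as handle:
    toml.dump(settings, handle)

# Placeholder model - the real autoencoder would be built here instead
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(28 * 28),
])
model.compile(optimizer="adam", loss="mse")

# Save a summary of the model structure
with open(os.path.join(settings["dir_output"], "summary.txt"), "w") as handle:
    model.summary(print_fn=lambda line: handle.write(line + "\n"))

# A checkpoint per epoch + a TSV-formatted metrics file, via standard Keras callbacks
callbacks = [
    tf.keras.callbacks.ModelCheckpoint(os.path.join(settings["dir_output"], "checkpoints", "epoch_{epoch}.hdf5")),
    tf.keras.callbacks.CSVLogger(os.path.join(settings["dir_output"], "metrics.tsv"), separator="\t"),
]
# model.fit(..., epochs=settings["epochs"], callbacks=callbacks)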

All of this will take a while (especially since this isn't the focus right now), so I'm a bit unsure whether I'll get it all done by the time I write the next post. If this plan works, then I'll probably have to come up with a new name for this model, since it's not really a Temporal CNN. Suggestions are welcome - my current thought is maybe "Temporal Convolutional Autoencoder".

Conclusion

Since the last post I've made on here, I've made much more progress than I was expecting to. I've now got a useful(?) model for classifying tweets, which I'm going to use to answer some research questions (more on that in the next post - this one is long already haha). I've also got an autoencoder training properly for image-to-image translation as a step towards getting 2D flood predictions working.

Looking forwards, I'm going to be answering research questions with my tweet classification model and preparing to write something to publish, and expanding on the Python autoencoder for more direct flood prediction.

Curious about the code behind any of these models and would like to check them out? Please get in touch. While I'm not sure I can share the tweet classifier itself yet (though I do plan on releasing it as open-source after a discussion with my supervisor when I publish), I can probably share my various autoencoder implementations, and I can absolutely give you a bunch of links to things that I've used and found helpful.

If you've found this interesting or have some suggestions, please comment below. It's really motivating to hear that what I'm doing is actually interesting / useful to people.

Sources and further reading

PhD, Update 8: Eggs in Baskets

I'm back again with another PhD update blog post! Before we begin, here's a list of all the parts in the series so far:

As in the previous post, progress since last time is split in 2: The Temporal CNN, and the social media side of things. I've started to split my time more evenly between the 2 sides, as it seems like the Temporal CNN is going to take lots more work than anticipated and I'd rather not put all my eggs in 1 basket.

Temporal CNN

As you might have guessed, the Temporal CNN still isn't learning anything, but at least now I think I know what the problem is. Since last time, I've done a bunch of debugging and tests to try and figure out what the problem is. During that process, I've managed to reach a record of ~20% accuracy, which at least gives me hope that it's going to work!

Specifically, I used the MNIST (alternative site) handwriting digit dataset with my "easy" task as explain in the previous post, but with a small difference: I pre-generated 2 random tensors to serve as the "below 5" and "5 and above" targets to predict instead of a pair of tensors filled with 0s or 1s respectively. The model didn't like this at all, so this is how I now know what the problem is.
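If it helps to picture it, this is roughly what I mean by pre-generated random targets - a quick sketch, with illustrative shapes rather than the Temporal CNN's real output shape:

import numpy as np

# 2 fixed random tensors, generated once up-front, to act as the targets
rng = np.random.default_rng(seed=42)
target_below_5 = rng.random((28, 28, 1), dtype=np.float32)
target_5_plus = rng.random((28, 28, 1), dtype=np.float32)

def make_target(digit_label):
    # Map an MNIST label onto one of the 2 pre-generated targets
    return target_below_5 if digit_label < 5 else target_5_plus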

For those interested, here's the laundry list of other things I've tried since last time:

  • Giving it more data (all of 2007, with the 2013 floods as validation; made things a bit worse)
  • Found and fixed a bug in data normalisation that managed to sneak through during the rewrite (reduced training times a touch)
  • Inverting the heightmap (helped a bit)
  • Making the model deeper (Gave me a full 5% accuracy increase from 15% to 20%!)

Knowing what the problem is though is 1 thing, but solving it is another matter entirely. Thankfully, my supervisor and I have a plan to look into using a modified version of the latter half of a variational autoencoder and squidge it onto the tail end of the Temporal CNN. If it works, then I'm imagining that we'll need a new name for the Temporal CNN (suggestions?), but I'll tackle that once I've finished revising the model.

For context, a variational autoencoder is a modified "vanilla" autoencoder, and is 1 of 2 different main classes of generative AI model architecture - the other being Generative Adversarial Networks (GAN). In contrast to a GAN, a variational autoencoder does image-to-image translation with a single model, and maps an input parameter space onto an output parameter space. It first encodes the input to the model into a smaller tensor of features, before upscaling that back into an image again. In this fashion, it can learn to translate between 2 different images - for example putting glasses on people's faces.

To do this, I'm going to implement a vanilla variational autoencoder using the MNIST dataset, and once I've done this I'll then lift part of the model structure and transpose it onto the top of my existing Temporal CNN - by doing it this way I'll ensure that I have a known-good model to work with that is definitely capable of image-to-image translation.
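For a rough idea of the encode-then-decode structure I'm describing, here's a minimal Keras sketch on 28x28x1 MNIST images. It omits the variational sampling step (so strictly it's a vanilla convolutional autoencoder), and the layer sizes are guesses rather than my actual architecture:

import tensorflow as tf

# Encoder: squash the input down into a smaller tensor of features
inputs = tf.keras.Input(shape=(28, 28, 1))
encoded = tf.keras.layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)   # 14x14
encoded = tf.keras.layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(encoded)  # 7x7

# Decoder: upscale the features back up into an image again
decoded = tf.keras.layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(encoded)
decoded = tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(decoded)
outputs = tf.keras.layers.Conv2D(1, 3, padding="same", activation="sigmoid")(decoded)

autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
# autoencoder.fit(x_train, x_train, ...) - trained to reconstruct / translate images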

Social Media

In other news, I've started to make some real progress on the social media side of things. I've downloaded and anonymised some tweets (the code for which is open source on npm under the package name twitter-academic-downloader - I intend to write a separate blog post about it at some point soon-ish), and I've also put together an LSTM-based model to start looking at doing some text classification.

I decided to implement said model in Python instead of Javascript, because from what I can tell Tensorflow.js doesn't come with as many batteries included as Tensorflow for Python does for natural language processing-based tasks. This has caused some interesting adventures (and a number of frustrating crashes), but I think I'm starting to get the hang of it.

In particular it's interesting coming from Tensorflow.js (which is the later project), because Tensorflow for Python feels much less cohesive and more disjointed as a library. Tensorflow.js has learnt and applied lessons from the Python implementation, resulting in a much more cohesive and well thought out API. A prime example of this is tf.Dataset vs tf.keras.Sequence in the Python version - something that isn't an issue in Tensorflow.js, as the Javascript bindings have a single tf.Dataset.

This aside, my next step here is to train a significantly sized model that's larger than the mini model with a single layer and 100 units I've been using for testing purposes (that's my task for this afternoon - which I've likely done by the time you're reading this post).
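For the curious, by "significantly sized" I mean something shaped roughly like the sketch below - the vocabulary size, sequence length, and layer sizes here are placeholders rather than my actual hyperparameters:

import tensorflow as tf

vocab_size, sequence_length, embedding_dim = 20000, 100, 100  # placeholder values

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=sequence_length),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. positive / negative
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()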

In terms of literature, I've read a bunch more papers on the subject since last time - but I still feel like I've got more to read. Recently I read a series of papers about word embeddings (converting words into numerical tensors), which was very interesting. The process has evolved over the years, starting from a simple dictionary mapping incrementing numbers to words, to training an AI to generate said representations in increasingly sophisticated ways (starting with word2vec, then moving on to - in no particular order - ELMo, GloVe, and finally BERT; transformers are pretty incredible models). It was a fascinating read - I can recommend it to anyone who's interested in natural language processing (along with this excellent post).

In the model I've implemented, I've ultimately decided to go with GloVe (Global Vectors for Word Representation), as the pre-trained model is simply a text file containing a lookup table one can read into a dictionary or hash table.
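Loading it really is about as simple as it sounds. Here's a minimal sketch, assuming the standard glove.6B.100d.txt download (swap in whichever variant you're actually using):

import numpy as np

embeddings = {}
with open("glove.6B.100d.txt", encoding="utf-8") as handle:
    for line in handle:
        parts = line.rstrip().split(" ")
        # Each line is a word followed by the components of its vector
        embeddings[parts[0]] = np.asarray(parts[1:], dtype=np.float32)

print(embeddings["flood"].shape)  # (100,)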

Conclusion

Things have been moving forwards - albeit slowly. I've got an idea as to how I can resolve the issues I've been facing with the Temporal CNN (pending a new name once I'm done with all the modifications and I know what the model architecture is going to be like), though it's going to take a lot of work.

Things are finally starting to move in social media land - hopefully the accuracy of the LSTM-based model will be higher than that of the mini model I trained, which was only 50% on a balanced dataset - no better than blind guessing!

See you again in 2 months or so, when hopefully I'll have some real results to show (though of course I'll be keeping up with weekly posts about other things in the meantime). If you have any comments or questions about any of this - please leave a comment below! I'd love to hear your thoughts.

Sources and further reading

A much easier way to install custom versions of Python

Recently, I wrote a rather extensive blog post about compiling Python from source: Installing Python, Keras, and Tensorflow from source.

Since then, I've learnt of multiple other ways to achieve the same goal which are, as it turns out, much easier.

For context, the reason for running a specific version of Python in the first place is that my University's High-Performance Computer (HPC), Viper, doesn't have a version of Python new enough to run the latest version of Tensorflow.

Using miniconda

After contacting the Viper team at the suggestion of my supervisor, I discovered that they already had a mechanism in place for specifying which version of Python to use. It seems obvious in hindsight - since they are sure to have been asked about this before, they already had a solution in the form of miniconda.

If you're lucky enough to have access to Viper, then you can load miniconda like so:

module load python/anaconda/4.6/miniconda/3.7

If you don't have access to Viper, then worry not. I've got other methods in store which might be better suited to your environment in later sections.

Once loaded, you can specify a version of Python like so:

conda create -n py python=3.8

The -n py specifies the name of the environment you'd like to create, and can be anything you like - perhaps the name of the project you're working on would be a good idea. The python=3.8 is the version of Python you want to use. You can list the versions of Python available like so:

conda search -f python

Then, to activate the new environment, do this:

conda init bash
conda activate py
exec bash

Replace py with the name of the environment you created above.

Now, you should have the specific version of Python you wanted installed and ready to use.

Edit 2022-03-30: Added conda install pip step, as some systems don't natively have pip by default which causes issues.

The last thing we need to do here is to install pip inside the virtual conda environment. Do that like so:

conda install pip

You can also install packages with pip, and it should all come out in the wash.

For Viper users, further information about miniconda can be found here: Applications/Miniconda

Gentoo Project Prefix

Another option I've been made aware of is Gentoo's Project Prefix. Essentially, it installs Gentoo (a distribution of Linux) inside a directory without root privileges. It doesn't work very well on Ubuntu, however, due to this bug - but it should work on other systems.

They provide a bootstrap script that you can run that helps you bootstrap the system. It asks you a few questions, and then gets to work compiling everything required (since Gentoo is a distribution that compiles everything from source).

If you have multiple versions of gcc available, try telling it about a slightly older version of GCC if it fails to install.

If you can get it to install, a Gentoo Prefix install allows you to install whatever software you like!

pyenv

The last solution to the problem I'm aware of is pyenv. It automates the process of downloading and compiling specified versions of Python, and also updates your shell automatically. It does require some additional dependencies to be installed though, which could be somewhat awkward if you don't have sudo access to your system. I haven't actually tried it myself, but it may be worth looking into if the other 2 options don't work for you.

Conclusion

There's always more than 1 way to do something, and it's always worth asking if there's a better way if the way you're currently using seems hugely complicated.

Installing Python, Keras, and Tensorflow from source

I found myself in the interesting position recently of needing to compile Python from source. The reasoning behind this is complicated, but it boils down to a need to use Python with Tensorflow / Keras for some natural language processing AI, as Tensorflow.js isn't going to cut it for the next stage of my PhD.

The target upon which I'm aiming to be running things currently is Viper, my University's high-performance computer (HPC). Unfortunately, the version of Python on said HPC is rather old, which necessitated obtaining a later version. Since I obviously don't have sudo permissions on Viper, I couldn't use the default system package manager. Incredibly, pre-compiled Python binaries are not distributed for Linux either, which meant that I ended up compiling from source.

I am going to be assuming that you have a directory at $HOME/software in which we will be working. In there, there should be a number of subdirectories:

  • bin: For binaries, already added to your PATH
  • lib: For library files - we'll be configuring this correctly in this guide
  • repos: For git repositories we clone

Make sure you have your snacks - this was a long ride to figure out and write - and it's an equally long ride to follow. I recommend reading this all the way through before actually executing anything to get an overall idea as to the process you'll be following and the assumptions I've made to keep this post a reasonable length.

Setting up

Before we begin, we need some dependencies:

  • gcc - The compiler
  • git - For checking out the cpython git repository
  • readline - An optional dependency of cpython (presumably for the REPL)

On Viper, we can load these like so:

module load utilities/multi
module load gcc/10.2.0
module load readline/7.0

Compiling openssl

We also need to clone the openssl git repo and build it from source:

cd ~/software/repos
git clone git://git.openssl.org/openssl.git;    # Clone the git repo
cd openssl;                                     # cd into it
git checkout OpenSSL_1_1_1-stable;              # Checkout the latest stable branch (do git branch -a to list all branches; Python will complain at you during build if you choose the wrong one and tell you what versions it supports)
./config;                                       # Configure openssl ready for compilation
make -j "$(nproc)"                              # Build openssl

With openssl compiled, we need to copy the resulting binaries to our ~/software/lib directory:

cp lib*.so* ~/software/lib;
# We're done, cd back to the parent directory
cd ..;

To finish up openssl, we need to update some environment variables to let the C++ compiler and linker know about it, but we'll talk about those after dealing with another dependency that Python requires.

Compiling libffi

libffi is another dependency of Python that's needed if you want to use Tensorflow. To start, go to the libffi GitHub releases page in your web browser, and copy the URL for the latest release file. It should look something like this:

https://github.com/libffi/libffi/releases/download/v3.3/libffi-3.3.tar.gz

Then, download it to the target system:

cd ~/software/lib
curl -OL URL_HERE

Note that we do it this way, because otherwise we'd have to run the autogen.sh script which requires yet more dependencies that you're unlikely to have installed.

Then extract it and delete the tar.gz file:

tar -xzf libffi-3.3.tar.gz
rm libffi-3.3.tar.gz

Now, we can configure and compile it:

cd libffi-3.3;                  # cd into the directory we just extracted
./configure --prefix=$HOME/software
make -j "$(nproc)"

Before we install it, we need to create a quick alias:

cd ~/software;
ln -s lib lib64;
cd -;

libffi for some reason likes to install to the lib64 directory, rather than our pre-existing lib directory, so creating an alias makes it so that it installs to the right place.
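With the alias in place, as far as I can tell a plain make install from the libffi directory should then copy everything into ~/software:

make install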

Updating the environment

Now that we've dealt with the dependencies, we now need to update our environment so that the compiler knows where to find them. Do that like so:

export LD_LIBRARY_PATH="$HOME/software/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}";
export LDFLAGS="-L$HOME/software/lib -L$HOME/software/include $LDFLAGS";
export CPPFLAGS="-I$HOME/software/include -I$HOME/software/repos/openssl/include -I$HOME/software/repos/openssl/include/openssl $CPPFLAGS"

It is also advisable to update your ~/.bashrc with these settings, as you may need to come back and recompile a different version of Python in the future.

Personally, I have a file at ~/software/setup.sh which I run with source $HOME/software/setup.sh in my ~/.bashrc file to keep things neat and tidy.

Compiling Python

Now that we have openssl and libffi compiled, we can turn our attention to Python. First, clone the cpython git repo:

git clone https://github.com/python/cpython.git
cd cpython;

Then, checkout the latest tag. This essentially checks out the latest stable release:

git checkout "$(git tag | grep -ivP '[ab]|rc' | tail -n1)"

Important: If your intention is to use Tensorflow, check the Tensorflow Install page for supported Python versions. It's probable that it doesn't yet support the latest version of Python, so you might need to checkout a different tag here. For some reason, Python is really bad at propagating new versions out to the community quickly.

Before we can start the compilation process, we need to configure it. We're going for performance, so execute the configure script like so:

./configure --with-lto --enable-optimizations --with-openssl=/absolute/path/to/openssl_repo_dir

Replace /absolute/path/to/openssl_repo_dir with the absolute path to the openssl repo we built above.

Now, we're ready to compile Python. Do that like so:

make -j "$(nproc)"

This will take a while, but once it's done it should have built Python successfully. For a sanity check, we can also test it like so:

make -j "$(nproc)" test

The Python binary compiled should be called simply python, and be located in the root of the git repository. Now that we've compiled it, we need to make a few tweaks to ensure that our shell uses our newly compiled version by default and not the older version from the host system. Personally, I keep my ~/bin folder under version control, so I install host-specific software to ~/software, and put ~/software/bin in my PATH like so:

export PATH=$HOME/software/bin:$PATH

With this in mind, we need to create some symbolic links in ~/software/bin that point to our new Python installation:

cd $HOME/software/bin;
ln -s relative/path/to/python_binary python
ln -s relative/path/to/python_binary python3
ln -s relative/path/to/python_binary python3.9

Replace relative/path/to/python_binary with the relative path to the Python binary we compiled above.

To finish up the Python installation, we need to get pip up and running, the Python package manager. We can do this using the inbuilt ensurepip module, which can bootstrap a pip installation for us:

python -m ensurepip --user

This bootstraps pip into our local user directory. This is probably what you want, since if you try to install it directly, the shebang in the resulting pip script incorrectly points to the system's version of Python - which doesn't exist.

Then, update your ~/.bash_aliases and add the following:

export LD_LIBRARY_PATH=/absolute/path/to/openssl_repo_dir/lib:$LD_LIBRARY_PATH;
alias pip='python -m pip'
alias pip3='python -m pip'

...replacing /absolute/path/to/openssl_repo_dir with the path to the openssl git repo we cloned earlier.

The next stage is to use virtualenv to locally install our Python packages that we want to use for our project. This is good practice, because it keeps our dependencies locally installed to a single project, so they don't clash with different versions in other projects.

Before we can use virtualenv though, we have to install it:

pip install virtualenv

Unfortunately, Python / pip is not very clever at detecting the actual Python installation location, so in order to actually use virtualenv, we have to use a wrapper script - because the shebang in the main ~/.local/bin/virtualenv entrypoint does not use /usr/bin/env to auto-detect the python binary location. Save the following to ~/software/bin/virtualenv (or any other location that's in your PATH ahead of ~/.local/bin):

#!/usr/bin/env bash

exec python ~/.local/bin/virtualenv "$@"

For example:

# Write the script to disk
nano ~/software/bin/virtualenv;
# chmod it to make it executable
chmod +x ~/software/bin/virtualenv

Installing Keras and tensorflow-gpu

With all that out of the way, we can finally use virtualenv to install Keras and tensorflow-gpu. Let's create a new directory and create a virtual environment to install our packages in:

mkdir tensorflow-test
cd tensorflow-test;
virtualenv "$PWD";
source bin/activate;

Now, we can install Tensorflow & Keras:

pip install tensorflow-gpu

It's worth noting here that Keras is a dependency of Tensorflow.

Tensorflow has a number of alternate package names you might want to install instead depending on your situation:

  • tensorflow: Stable tensorflow without GPU support - i.e. it runs on the CPU instead.
  • tf-nightly-gpu: Nightly tensorflow for the GPU. Useful if your version of Python is newer than the versions supported by stable Tensorflow.

Once you're done in the virtual environment, exit it like this:

deactivate

Phew, that was a huge amount of work! Hopefully this sheds some light on the maddeningly complicated process of compiling Python from source. If you run into issues, you're welcome to comment below and I'll try to help you out - but you might be better off asking the Python community instead, as they've likely got more experience with Python than I have.

Sources and further reading
