Paper Review - CartoonGAN
Studying image-to-image translation. Overview of the 2018 CVPR paper "CartoonGAN: Generative Adversarial Networks for Photo Cartoonization".
Why can the Transformer achieve the same tasks as Bi-LSTM and Seq2Seq models?
A Seq2Seq model takes a sequence as input and produces a sequence as output. The input and output sequences are not always the same length; in other words, the length of the output sequence is determined by the model. Problems such as Machine Translation and Speech Recognition therefore call for a Seq2Seq model with an encoder-decoder architecture. The Transformer can achieve such tasks because it also has an encoder-decoder architecture: the encoder processes the input sequence into hidden states, which provide information for the decoder to predict the output sequence. The Transformer uses an autoregressive decoder, which feeds each previously predicted output back in as input to generate the next prediction, stopping when it emits an end-of-sequence token. In this way the Transformer can determine the length of its output and thus solve Seq2Seq problems like Speech Recognition.
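Below is a minimal sketch, not from the original post, of how autoregressive decoding can work with PyTorch's nn.Transformer; the vocabulary size, token ids, and model dimensions are toy values chosen only for illustration.

```python
# Sketch: autoregressive decoding with an encoder-decoder Transformer.
# All sizes and token ids (BOS/EOS) are illustrative assumptions.
import torch
import torch.nn as nn

VOCAB, D_MODEL, BOS, EOS, MAX_LEN = 100, 32, 1, 2, 20

embed = nn.Embedding(VOCAB, D_MODEL)
model = nn.Transformer(d_model=D_MODEL, nhead=4, batch_first=True)
to_logits = nn.Linear(D_MODEL, VOCAB)

src = torch.randint(3, VOCAB, (1, 10))      # dummy input sequence (batch=1)
memory = model.encoder(embed(src))          # encoder turns input into hidden states

ys = torch.tensor([[BOS]])                  # decoding starts from a begin token
for _ in range(MAX_LEN):
    mask = model.generate_square_subsequent_mask(ys.size(1))
    out = model.decoder(embed(ys), memory, tgt_mask=mask)
    next_tok = to_logits(out[:, -1]).argmax(-1, keepdim=True)
    ys = torch.cat([ys, next_tok], dim=1)   # feed the prediction back in as input
    if next_tok.item() == EOS:              # the model emits EOS to stop,
        break                               # so it determines the output length
```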
Some Practice Questions on common Deep Learning architectures
Some Practice Questions on common Deep Learning architectures. The topics cover Autoencoders, CNNs, RNNs, GANs, and Transformers.
ResNet and DenseNet
ResNet introduced shortcut connections that merge by element-wise addition (Add), while DenseNet further exploits the effect of shortcut connections by merging with channel-wise concatenation (Concat). Example code attached.
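As a rough sketch (my own illustration, not the post's attached code), the following PyTorch snippet contrasts a ResNet-style block, which merges the shortcut by addition, with a DenseNet-style layer, which concatenates features along the channel dimension:

```python
# Sketch: ResNet's Add vs. DenseNet's Concat; layer sizes are illustrative.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """ResNet-style: shortcut merged by element-wise Add, channels unchanged."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return torch.relu(self.conv(x) + x)          # Add: y = F(x) + x

class DenseLayer(nn.Module):
    """DenseNet-style: shortcut merged by Concat, channels grow by `growth`."""
    def __init__(self, in_ch, growth):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, growth, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return torch.cat([x, self.conv(x)], dim=1)   # Concat: y = [x, F(x)]

x = torch.randn(1, 16, 8, 8)
print(ResidualBlock(16)(x).shape)           # torch.Size([1, 16, 8, 8])
print(DenseLayer(16, growth=12)(x).shape)   # torch.Size([1, 28, 8, 8])
```

Note how Add keeps the channel count fixed, while Concat grows it by the growth rate, which is why each DenseNet layer sees the feature maps of all preceding layers.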
Some Practice Questions on DNN
Some Practice Questions on DNN. The topics cover gradient descent, optimization, loss functions, the vanishing gradient problem, and regularization techniques.
Revisit Basic Math of Machine Learning
Notes revisiting GMM, SVM, and the basic math of Machine Learning.
I made my childhood game in Unity!
Paper Warfare is a top-down combat game for PC, developed with Unity. Players customize their own self-made paper aircraft and control them on the battlefield to duel with their opponents' aircraft creations. Paper Warfare originated from a simple "Flick and Hit" game inspired by the collective primary-school memories of Hong Kong teenagers, who played with paper-folded aircraft back in their childhood. In that real-life game, each player first folds their own aircraft from a piece of paper, places it on the desk, and then flicks it, trying their best to knock out their opponent. The first aircraft knocked off the desk loses the game. Our game is mainly based on this mini-game concept, as we wish to evoke those nostalgic memories in players.
I made a First Person Bomberman game in Unity!
This is a VR Bomberman-like game. We want to bring new mechanics to old games, because we found that many old games have interesting gameplay but slowly fade from people's memory as their popularity declines in Hong Kong. We think there is still a lot of potential in old games such as Bomberman. In this project, we develop the hidden potential of Bomberman by changing its 2D top-down perspective to a 3D first-person perspective. We believe such an interesting mechanic will make the game much more fun and, hopefully, regain popularity in Hong Kong for classic games like Bomberman.
PyTorch Examples Collection
My collection of PyTorch example code. Most of it is from Hung-yi Lee's Machine Learning lectures.
Self-Attention and Transformer
My notes studying Self-Attention and the Transformer. Most of them are from Hung-yi Lee's Machine Learning lectures.