Week 7 (30 Dec – 6 Jan) – Tkinter Canvas

A major drawback of the existing system for identifying handwriting is its sensitivity to the input image’s size, type, and content. To mitigate this, the group decided that the algorithm’s input image will be created by the user at runtime using a Tkinter canvas. The user first draws their numbers on the canvas, saves, and exits, and the algorithm immediately produces its prediction. This solves the size issue, since the image’s dimensions are now predetermined by the canvas, eliminating size mismatches in the image similarity algorithm. The handwriting will also be clear and match the requirements of the MNIST database comparison, whereas a photographed handwriting sample rarely has an ideal crop or color. The code provided by Nikhil Singh contains a Tkinter canvas script that emulates the program Paint by reading the mouse’s x and y position while it is over the canvas and its button is clicked. However, the code does not come with a save feature, and the group had problems implementing one, as Tkinter has no built-in way to save the canvas. The group first used Image-Grabber to take a screenshot of the canvas based on its position on the screen. This method proved inconsistent: the image was not cropped properly and depended on the canvas’ position on the computer screen, so if the canvas sat at the right edge of the screen, the screenshot would miss it. The captured image also varied in size, which conflicted with the image comparison code. To solve this, the group implemented a save feature that uses Ghostscript: the canvas is first exported as PostScript, which is then converted into a JPG output.
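The save step can be sketched roughly as follows. This is a hypothetical outline, not the group’s actual script — function names like `save_canvas` are ours, and Pillow with Ghostscript installed is assumed for the PostScript-to-JPG conversion:

```python
# Hypothetical sketch of a paint-style Tkinter canvas with a save step.
# Canvas.postscript() emits PostScript; Pillow (which calls out to
# Ghostscript for PS/EPS files) can open that and re-save it as JPG.
import tkinter as tk

def ps_to_jpg_name(ps_path):
    """Map a PostScript file name to its JPG counterpart."""
    base = ps_path.rsplit(".", 1)[0]
    return base + ".jpg"

def save_canvas(canvas, ps_path="canvas.ps"):
    """Dump the canvas to PostScript, then convert to JPG via Pillow/Ghostscript."""
    canvas.postscript(file=ps_path, colormode="color")
    from PIL import Image          # requires Ghostscript on the system
    Image.open(ps_path).save(ps_to_jpg_name(ps_path), "JPEG")

def main():
    root = tk.Tk()
    canvas = tk.Canvas(root, width=280, height=280, bg="white")
    canvas.pack()
    # emulate Paint: draw a small dot wherever the mouse is dragged
    canvas.bind("<B1-Motion>",
                lambda e: canvas.create_oval(e.x - 4, e.y - 4, e.x + 4, e.y + 4,
                                             fill="black"))
    root.bind("<s>", lambda e: save_canvas(canvas))   # press "s" to save
    root.mainloop()

if __name__ == "__main__":
    main()
```

Because the canvas itself is exported rather than the screen, the output no longer depends on where the window sits on the desktop.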

Posted in Uncategorized | Comments Off on Week 7 (30 Dec – 6 Jan) – Tkinter Canvas

Week 6 (23 Dec – 30 Dec) – Image Comparison

After the handwriting has been predicted by comparison against the MNIST-supplied data, the handwriting image is compared to an image of the predicted number rendered in a certain font. For example, if the predicted number is 3, the original input image is compared to an image of 3 in Times New Roman. This is done by calculating the SSIM (Structural Similarity Index) between the two images. The code provided by Adrian Rosebrock produces both the SSIM value and a visualization of the images’ differences. OpenCV is used to identify exactly where on the x, y axes the differences lie, allowing the algorithm to map them out. The SSIM is computed after both images (of equal size) are converted to grayscale, which simplifies the per-pixel comparison. The difference is represented as a map of floating-point values between 0 and 1: values near 1 mark areas that are the same, while values near 0 mark areas that differ. Bounding boxes are then generated around the locations where differences exist, and the algorithm visualizes them. An issue with this code is that the input image and the compared image must be exactly the same size; the code will not function otherwise, so the input image must either come from a controlled environment or be resized. Resizing the input image is not ideal, as it pixelates and distorts the original. Hence the group must create a fixed, controlled environment in which to generate the image. Source: https://www.pyimagesearch.com/2017/06/19/image-difference-with-opencv-and-python/
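A minimal sketch of the comparison, assuming scikit-image’s `structural_similarity` (the current name for the `compare_ssim` call in the tutorial) and using a plain NumPy bounding box in place of OpenCV’s contour step, so the example stays self-contained:

```python
# Minimal SSIM-difference sketch with scikit-image and NumPy.
# The tutorial draws contours with OpenCV; here a NumPy bounding box
# over the low-SSIM region illustrates the same idea.
import numpy as np
from skimage.metrics import structural_similarity

# two same-sized grayscale images: white, with a black patch in the second
image_a = np.full((64, 64), 255, dtype=np.uint8)
image_b = image_a.copy()
image_b[20:30, 20:30] = 0

# full=True also returns the per-pixel SSIM map (values near 1.0 = similar)
score, diff = structural_similarity(image_a, image_b, full=True, data_range=255)
print("SSIM: {:.4f}".format(score))

# bounding box around everything that scored poorly
ys, xs = np.where(diff < 0.5)
x0, y0, x1, y1 = xs.min(), ys.min(), xs.max(), ys.max()
print("difference box:", (x0, y0, x1, y1))
```

Note that `structural_similarity` raises an error if the two arrays differ in shape, which is exactly the size constraint described above.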

Posted in Uncategorized | Comments Off on Week 6 (23 Dec – 30 Dec) – Image Comparison

Week 5 (17 Dec – 23 Dec) – Code Explanation

TensorFlow will be the main library used in the code to read the handwritten images and convert them into text. The code provided by Niek Temme will be the basis of our handwriting-to-text algorithm. The code uses the MNIST database to train the model, which is then saved locally to avoid retraining the model every time the code is executed. The model is then loaded into a Python script, where the user provides a handwritten text image to feed to the algorithm. The image is then compared to those in the MNIST database; however, it is important that the dimensions of the given image match those used in the MNIST database. The code does not provide this image size conversion, hence it is a task that the group shall accomplish. predictint() is the main function that produces a prediction based on the image given to the algorithm.
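The missing size-conversion step might look roughly like this — a hedged sketch using Pillow and NumPy, where `prepare_image` is our own name, not part of the original code:

```python
# Hypothetical sketch of the size-conversion task: scale an arbitrary
# handwriting image down to the 28x28 grayscale format MNIST models expect.
import numpy as np
from PIL import Image

def prepare_image(img):
    """Return a flat 784-float vector in [0, 1], MNIST-style (bright ink on dark)."""
    img = img.convert("L").resize((28, 28), Image.LANCZOS)
    pixels = np.asarray(img, dtype=np.float32)
    pixels = (255.0 - pixels) / 255.0     # invert: MNIST digits are white on black
    return pixels.flatten()

# usage with a blank white test image
vec = prepare_image(Image.new("L", (200, 200), 255))
print(vec.shape)   # (784,)
```

The flattened 784-value vector matches the input shape of the MNIST model before prediction.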

Source: https://niektemme.com/2016/02/21/tensorflow-handwriting/

Posted in Uncategorized | Comments Off on Week 5 (17 Dec – 23 Dec) – Code Explanation

Week 4 (9 Dec – 16 Dec) – Handwriting Algorithm Explanation

From our understanding, the code uses three neural network components to recognize handwriting: a convolutional neural network (CNN), a recurrent neural network (RNN), and Connectionist Temporal Classification (CTC). The networks identify words character by character, so they can recognize words outside the training word set as long as the writing is neat. First, the image is fed into the CNN, which identifies the relevant parts of the image; the output of this step is a downsized image with a feature map added. This output is then fed into the RNN to identify the relevant information in the sequence. A Long Short-Term Memory (LSTM) implementation of the RNN is used to propagate information across longer distances, leading to more accurate results. The RNN outputs a matrix, which the CTC uses together with the ground-truth text to compute the loss value during training; at inference time, the CTC decodes the final text from the matrix given by the RNN. In the context of the code implementation:

  1. A grayscale 128 x 32 image is provided as input (it may need to be rescaled beforehand)
  2. The image is copied into a white 128 x 32 target image to properly fit and scale it
  3. The CNN generates the feature map based on the input image
  4. The RNN produces output based on the features that correlate strongly with characters in the dataset
  5. The features are compared against the character list of the dataset

It will fill in the blanks based on predictions, choosing the most likely character to follow the previous characters.
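The final decoding step can be illustrated with a best-path (greedy) CTC decode: take the argmax character at each timestep, collapse repeats, and drop the blank label. The character set and scores below are made up for illustration, not taken from the actual model:

```python
# Best-path (greedy) CTC decoding sketch.
# The toy character list and RNN output matrix here are invented.
import numpy as np

CHARS = "ab"          # toy character set; the last index is the CTC blank
BLANK = len(CHARS)

def best_path_decode(matrix):
    """matrix: timesteps x (len(CHARS)+1) array of per-character scores."""
    best = np.argmax(matrix, axis=1)          # most likely label per timestep
    decoded = []
    prev = None
    for label in best:
        if label != prev and label != BLANK:  # collapse repeats, skip blanks
            decoded.append(CHARS[label])
        prev = label
    return "".join(decoded)

# timesteps decode as: a, a, blank, b  ->  "ab"
matrix = np.array([[0.9, 0.05, 0.05],
                   [0.8, 0.1, 0.1],
                   [0.1, 0.1, 0.8],
                   [0.1, 0.8, 0.1]])
print(best_path_decode(matrix))   # prints "ab"
```

The real code uses more sophisticated decoders (beam search), but the collapse-and-drop-blank step is the same idea.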

However,

Source: https://towardsdatascience.com/build-a-handwritten-text-recognition-system-using-tensorflow-2326a3487cd5

Posted in Uncategorized | Comments Off on Week 4 (9 Dec – 16 Dec) – Handwriting Algorithm Explanation

Week 3 (1 Dec – 8 Dec) – Installing Tensorflow

While trying to run the code, the group ran into various problems with TensorFlow. The first wall we hit was difficulty installing the library with our current version of Python. Instead of using a system-wide Python environment, the group decided to use Anaconda to create a virtual environment. TensorFlow appears to have compatibility issues with the latest version of Python 3, so we created an environment with a downgraded Python 3 to try to install it. We successfully installed TensorFlow with Anaconda, but still faced issues executing the code. It turns out TensorFlow has version-dependent compatibility problems: newer versions of TensorFlow do not support code written against an older version. After realizing this, we downgraded TensorFlow and successfully ran the code. Our solution is as follows:

We used the Anaconda prompt to install TensorFlow:

  1. conda create -n tensorflow_env python=3.6
  2. activate tensorflow_env
  3. pip install tensorflow==1.12
  4. pip install opencv-python
  5. pip install editdistance
  6. pip install matplotlib

Posted in Uncategorized | Comments Off on Week 3 (1 Dec – 8 Dec) – Installing Tensorflow

Week 2 (24 – 31 October) – Researching Handwriting to Python Code

The group decided to use Python 3 as the coding language for this project, as it is the language we are most familiar with and it has libraries supporting artificial-intelligence programming. Since we are not familiar with handwriting-to-text conversion and image detection, we decided to use an existing Python codebase as our starting point to develop a deeper understanding of how the algorithm works and how the Python libraries are implemented. We found code in Harald Scheidl’s GitHub repository. The code uses the TensorFlow 1.12 library as its primary basis to read the handwriting and gauge its similarity and likeness. This was the most accurate Python code we found, as there are not many Python handwriting-to-text projects available.

Posted in Uncategorized | Comments Off on Week 2 (24 – 31 October) – Researching Handwriting to Python Code

Week 1- Intelligent System Project Weekly Progress

Group Members:

  • Dumac Revano Chen
  • Naufal Muhammad Zavier
  • Muhammad Andi Yusuf

17 – 23 October (1st week)

In the first week, our group researched ideas to conceptualize a project that would meet the criteria and demands of the assignment. Before researching or assigning any roles, the group brainstormed a few ideas to get a sense of direction. The first few ideas conceived were related to gaming artificial intelligence, such as capsa or chess. However, we wanted to challenge ourselves and create something other than a game. Another idea was uncensoring censored images, as well as superimposing one image onto another. While these ideas are original, the group thought they would be difficult given our inexperience and the time limit. The group settled on creating a handwriting-to-digital-text conversion, but found out it was not unique. Ultimately, the group decided to create a handwriting-to-digital-text conversion program that gauges the accuracy of a person’s handwriting by comparing it to a certain font. The application would be for people to practice writing in a certain font, or to use the program to improve their handwriting.

Posted in Uncategorized | Comments Off on Week 1- Intelligent System Project Weekly Progress

Sniffing Using Kali Linux TCPdump

Sniffing is an action where a user eavesdrops on the activities and network traffic of a target. Sniffing can be executed using the tcpdump command in Kali Linux. With two Kali Linux virtual machines (VMs), it is possible to test this sniffing method: the first VM is used as the target, while the other VM attacks it. This tutorial uses VirtualBox on Windows 10, running 64-bit Kali Linux.

These are the steps to sniff a computer’s internet activity:

  1. In the VirtualBox (VBox) settings, configure both Kali Linux VMs’ network settings from NAT to Bridged Adapter.
  2. Under the Network settings of VBox, click on Advanced and reset the MAC address of both VMs.
  3. Confirm the changes to the network settings on both VMs. These edits ensure that the two Kali Linux VMs do not share the same IP address.
  4. Afterwards, connect your device to a Wi-Fi adapter, hotspot, or other network compatible with the Bridged Adapter.
  5. Once both virtual machines are running, open a terminal in the target VM, type the command ifconfig, and note the target VM’s IP address.

    ifconfig command

  6. In the attacking VM, open a terminal and input the command tcpdump -i eth0 host [target ip address] -w filename.pcap. The tcpdump command works similarly to Wireshark in that it captures a device’s network traffic; it also reports how many packets it has captured.
    • For example: tcpdump -i eth0 host 192.168.31.34 -w testdump.pcap
  7. On the targeted VM, browse through the internet and open up websites you normally browse such as Google, Twitter, etc.
  8. After browsing for a designated amount of time, stop the tcpdump command by pressing Ctrl + C on the terminal.
  9. Open the filename.pcap file in your root folder.

Result:

Based on the result, the command was able to track how many packets it captured while the target browsed the internet; however, it did not capture the specific activities the target performed in their browser.
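To sanity-check the resulting capture file without opening Wireshark, the fixed 24-byte pcap global header that tcpdump writes can be read with Python’s standard struct module. The header bytes below are hand-built for illustration, not taken from a real capture:

```python
# A .pcap file starts with a 24-byte global header:
# magic, version major/minor, timezone, sigfigs, snaplen, link type.
import struct

# hand-built minimal header: pcap v2.4, snaplen 65535, linktype 1 (Ethernet)
header = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)

magic, major, minor, _, _, snaplen, network = struct.unpack("<IHHiIII", header)
assert magic == 0xA1B2C3D4            # little-endian, microsecond timestamps
print("pcap v{}.{}, snaplen {}, linktype {}".format(major, minor, snaplen, network))
```

Reading the first 24 bytes of testdump.pcap this way confirms the file is a valid capture before opening it in an analysis tool.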

Posted in Uncategorized | Comments Off on Sniffing Using Kali Linux TCPdump

Cancelling Cancer Contribution

Contributions

I am responsible for both the game design and implementing the gameplay mechanics. Since I am in charge of the gameplay, I am responsible for the instance variables and behavior of the red blood cell object, the spawning mechanics of the red cell objects, the stats generator, scoring, the different game modes, and three of the available power-ups in the game.

Red Blood Cells

I was responsible for the properties of the red blood cells, such as their instance variables and their behaviors.

Spawning

Spawning is essential to the gameplay mechanics as it sets the difficulty level of the game. In all game modes, a cell is spawned every 2 seconds at a random point between 10 and 860 on the x-axis, at point 0 on the y-axis.

I designed the spawning mechanics for both game modes, Endless and Timed. The difference is that in Endless the spawn count increases by one every 30 seconds, while in Timed 4-5 cells spawn at once every 2 seconds.

Stats Generator for Red Blood Cell Object

To determine whether or not a red blood cell is cancerous, its instance variables are manipulated upon the object’s first spawn.

On creation, the red blood cell’s instance variable MutateRate is randomized to a rounded value from 1 to 100. This randomized MutateRate value dictates how the stats (Size, Damaged, and Growth Rate) are randomized. Red blood cells with a MutateRate of 25 or below are considered cancer cells, while those above 25 are considered normal cells.

If a red blood cell has a MutateRate of 25 or below, its Size is randomized from 1 to 25 or from 70 to 100 using the formula random (1, 25) + (random (70, 75) * random (0, 1)), Damaged is set to “Damaged”, and Growth Rate is randomized from 70 to 100.

If a red blood cell has a MutateRate above 25, its Size is randomized from 26 to 69, Damaged is set to “Healthy”, and its Growth Rate is randomized from 1 to 69.

This reflects the fact that cancer cells have abnormal sizes compared to regular cells, as they can be significantly smaller or larger than average. Cancer cells tend not to repair themselves, hence they are represented by the “Damaged” stat. The growth rate of cancer cells is also faster than that of regular cells, hence the larger Growth Rate values for cancer cells.
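The stat rolls described above can be sketched in Python as follows. The game itself is built from Construct 2 events, so this is only an illustrative translation of the logic, and `roll_cell` is our own name:

```python
# Python sketch of the described stat rolls (illustration only;
# the actual game implements this with Construct 2 events).
import random

def roll_cell():
    mutate_rate = random.randint(1, 100)
    if mutate_rate <= 25:                      # cancer cell
        # either a very small or a very large size, per the formula in the text
        size = random.randint(1, 25) + random.randint(70, 75) * random.randint(0, 1)
        damaged = "Damaged"
        growth = random.randint(70, 100)
    else:                                      # normal, healthy cell
        size = random.randint(26, 69)
        damaged = "Healthy"
        growth = random.randint(1, 69)
    return {"mutate_rate": mutate_rate, "size": size,
            "damaged": damaged, "growth": growth}

print(roll_cell())
```

Rolling many cells and checking their ranges is a quick way to confirm the cancer/healthy split behaves as designed.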

Drag and Drop

The main gameplay of this project is dragging and dropping the cells into the appropriate zones, so this section explains the mechanics of the Drag and Drop behavior. My contribution was to make the experience smooth while dragging the cells, but also to prevent them from exploiting the object’s behavior of collision and solid state. For example, making it still possible for players to drag the cells under the effect of freeze and also disabling collision on drop to prevent unnecessary bounce.

Accept and Reject (Scoring)

Accept and reject is essentially the scoring system of the game: you accept normal cells and reject the cells that have become cancerous. My responsibility was to ensure the scoring system contains no errors, as a buggy scoring system makes the game unplayable.

Whether a cell has been dropped in the correct area is judged by collisions between the RedCell sprite and the AcceptSprite and RejectSprite. These invisible sprites are positioned at the bottom of the layer, as players are not meant to see them. The AcceptSprite is positioned on the left side, while the RejectSprite is positioned on the right. Depending on the cell’s mutate rate, a collision triggers either scoring or damage to the player’s life points.

Power Ups (Slow, Freeze, Cell Wall)

There are three power-ups in the game to give players an incentive to play more frequently, rewarding them with skills that help them score higher points.

I was responsible for designing and implementing three of the available power-ups: Cell Wall, Freeze, and Slow.

Cell wall spawns a wall of cells that reflects oncoming cells for 5 seconds to push cells away from the wrong zones. This is achieved by setting the visibility and state of the cell wall to visible and solid for 5 seconds so that on collision the red cells bounce off the cell wall.

Freeze freezes all the red cells currently on screen for 10 seconds, but they can still be dragged and dropped when clicked. This is done by disabling the red cells’ Bullet behavior for 10 seconds.

Slow reduces the speed of the red cells significantly for 10 seconds by reducing the acceleration and gravity of the red cell object.

Endless and Timed Mode

There are two game modes, Endless and Timed, and I was responsible for both. While the main difference between Endless and Timed is their spawning, there were other differences as well: I was responsible for the way the timer worked, which dictates how the two modes differ, and also for the scoring in Timed mode.
Posted in Uncategorized | Comments Off on Cancelling Cancer Contribution

Cancelling Cancer – Game Guide

Game Guide

Name Of The Game:

Cancelling Cancer, a game by Naufal Muhammad Zavier, Jason Christopher Chandra, and Bryan Moses Weku.

 

Screenshot:

 

Figure 1: Endless Mode Gameplay

Figure 2: Game Over Screen

Figure 3: Timed Mode with Freeze Active

Figure 4: Endless Mode with Combo Multiplier Active

How To Play:

There are two modes in the game, Endless and Timed. In Endless mode, there is no time limit, and the way you earn money differs from Timed mode (explained in the Scoring System section). In Timed mode, the player must earn as many points as they can in 30 seconds.

Players are faced with a horde of red blood cells that all look identical. However, when the player hovers their mouse over a red blood cell, a stats page appears. Stats displayed in green text indicate a healthy red blood cell, while red text indicates the red blood cell has mutated into a cancer cell.

Players use their mouse to drag the healthy red blood cells (green text) to the accept zone and the dangerous cancer cells to the reject zone. If players drag a cell into the wrong zone, they lose a life.

Each round, players have a chance to earn money, which can be used to buy special power-ups that make the game easier. Power-ups include: Freeze (freezes all cells for 10 seconds), Slow (slows down all cells for 20 seconds), Cell Wall (creates a wall to block cells for 5 seconds), and Destroy (destroys all cells instantly). Players can only use one skill at a time, and there is a cooldown period for each skill.

In Endless mode, spawning starts with one red cell every 2 seconds; however, the spawn rate increases every 30 seconds to make the mode progressively more difficult. In Timed mode, 3 cells spawn at once every 2 seconds instead of one, making the mode challenging since it is a short game session.

 

Player Controls:

  • Space: Used to pause the game.
  • Mouse: Used to drag the healthy red blood cells into the accept zone and the dangerous cancer cells to the reject zone.
  • Escape key: Used to retry or return to main menu.
  • Z, X, C, V keys: To use the skills for Cell Wall, Freeze, Slow and Destroy respectively.

Scoring System:

This game’s scoring system is based on the number of red blood cells the player accepts and the number of cancer cells the player rejects (number of cancer cells rejected + number of red blood cells accepted = points).

Endless mode has a combo multiplier feature which increases the score players gain while they are on a streak.

A player activates a streak by correctly putting a cell in the right zone five times in a row. The combo multiplier increases for every five-streak the player achieves: the first five-streak doubles the score gained, the next continuous five-streak quadruples it, and fifteen streaks onwards multiplies the score gained by eight.

A streak is lost if the player puts a cell in the incorrect zone, resetting the streak and the combo multiplier. The player also has limited lives, with only 5 chances; for every mistake, the player loses 1 life point.

If the player loses all of their life points, they lose the game. At the end of the game, players earn money, which can be used to buy power-ups that make the game easier.

Money is earned differently in each game mode. In Endless mode, the money earned is the number of points the player achieved plus the amount of time the player lasted. In Timed mode, the money earned is just the number of points achieved; however, Timed mode pays out more money than Endless mode, though the combo multiplier is not enabled there. In Timed mode, the player does not gain any money if they have lost all their lives.
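The combo rule can be sketched as follows; `combo_multiplier` is a hypothetical name, and the thresholds follow the streak description above:

```python
# Sketch of the combo-multiplier rule: below 5 correct in a row the score
# is unmultiplied; a streak of 5-9 doubles it, 10-14 quadruples it,
# and 15 or more multiplies it by eight.
def combo_multiplier(streak):
    if streak < 5:
        return 1
    return min(2 ** (streak // 5), 8)

print([combo_multiplier(s) for s in (0, 4, 5, 9, 10, 14, 15, 30)])
# prints [1, 1, 2, 2, 4, 4, 8, 8]
```

Capping the multiplier at eight keeps long streaks rewarding without letting the score run away.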

 

Contributions:

Naufal:
– Endless And Timed Mode
– Drag and drop
– Accept and Reject
– Power up for slow, freeze and cell wall
– Stats generator for Red Blood cell

Jason:
– HUD design for the powerups (power up icons and quantity)
– Destroy Powerup
– Pause button
– Shop design and layout and purchase
– Game Over screen

Weku:
– Provides the sprites that are used in game
– Animations for the red blood cell and the clock
– Tutorial Page
– Credits Page
– Scoring and combo multiplier
– Sound design

 

Items We Created:

  • Red cell Animation
  • Cell Wall Sprite
  • Background
  • Freeze Effect Sprite
  • Shield Effect Sprite
  • Animated Clock sprite

 

Items that we implemented:

Sounds

Images

 

Posted in Uncategorized | Comments Off on Cancelling Cancer – Game Guide