Body Shape Analyzer

At the age of 13, I found that most magazines described women’s body types based on proportions. However, what one person calls ‘wide hips’ may not seem wide to another, which leads to a lot of confusion. Here is how I provide a more concrete method of analyzing women’s body types.

Principle

Women’s body types are broadly classified into 7 types:

  • Rectangle – Similar measurements for all sections of the body.
  • Triangle – Large hips and a small bust.
  • Inverted Triangle – Broad shoulders and small hips.
  • Spoon – Hips larger than shoulders.
  • Hourglass – Hips proportionate to shoulders. Small waist.
  • Top Hourglass – Bust larger than hips. Small waist.
  • Bottom Hourglass – Hips larger than bust. Small waist.

Ideology

Using a person’s body measurements, determine their proportions and, from those proportions, their body type. The measurements to be taken as parameters are:

  • Bust
  • Waist
  • Hips

Devise a formula for easy detection of each body type.

I came up with very crude formulas back then, in 2007. Recently, I decided to update the idea by searching for more resources. Surprisingly, research had already been conducted on determining women’s body types using formulas. I have incorporated those formulas into my code.

Improved Method

Lee JY, Istook CL, and Nam YJ wrote a paper in which they proposed the following formulas (with measurements in inches) for each body type:

Hourglass

(bust − hips) ≤ 1″ AND (hips − bust) < 3.6″ AND ((bust − waist) ≥ 9″ OR (hips − waist) ≥ 10″)

Bottom hourglass

(hips − bust) ≥ 3.6″ AND (hips − bust) < 10″ AND (hips − waist) ≥ 9″ AND (high hip/waist) < 1.193

Top hourglass

(bust − hips) > 1″ AND (bust − hips) < 10″ AND (bust − waist) ≥ 9″

Spoon

(hips − bust) > 2″ AND (hips − waist) ≥ 7″ AND (high hip/waist) ≥ 1.193

Triangle

(hips − bust) ≥ 3.6″ AND (hips − waist) < 9″

Inverted Triangle

(bust − hips) ≥ 3.6″ AND (bust − waist) < 9″

Rectangle

(hips − bust) < 3.6″ AND (bust − hips) < 3.6″ AND (bust − waist) < 9″ AND (hips − waist) < 10″

Implementation in Python
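The paper’s rules translate directly into Python. Below is a minimal sketch (not the production app code), with all measurements in inches. The rules are checked in the order listed above; the high-hip measurement is optional and is only needed to separate the spoon shape from the bottom hourglass:

```python
def body_type(bust, waist, hips, high_hip=None):
    """Classify a body type from measurements in inches, following the
    formulas of Lee, Istook, and Nam. If high_hip is omitted, the spoon
    rule (which needs the high hip/waist ratio) can never fire."""
    ratio = (high_hip / waist) if high_hip else 0

    if (bust - hips) <= 1 and (hips - bust) < 3.6 and \
       ((bust - waist) >= 9 or (hips - waist) >= 10):
        return "hourglass"
    if 3.6 <= (hips - bust) < 10 and (hips - waist) >= 9 and ratio < 1.193:
        return "bottom hourglass"
    if 1 < (bust - hips) < 10 and (bust - waist) >= 9:
        return "top hourglass"
    if (hips - bust) > 2 and (hips - waist) >= 7 and ratio >= 1.193:
        return "spoon"
    if (hips - bust) >= 3.6 and (hips - waist) < 9:
        return "triangle"
    if (bust - hips) >= 3.6 and (bust - waist) < 9:
        return "inverted triangle"
    if (hips - bust) < 3.6 and (bust - hips) < 3.6 and \
       (bust - waist) < 9 and (hips - waist) < 10:
        return "rectangle"
    return "unclassified"
```

For example, `body_type(36, 26, 36)` returns `"hourglass"`.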

The demo app is available on Google Play:


Face Shape Detection

Around age 14, I attempted to come up with a way to accurately detect the shape of a person’s face. At present, beauty experts use loose guidelines, such as the width of the forehead or the sharpness of the chin, to roughly classify a face shape. Here is my process for determining the shape of a person’s face.

Principle

The human face can be broadly classified into 4 major types. Of course, there are plenty of variations because each person is unique. The major face shape categories are:

  • Square
  • Heart
  • Oval
  • Round

Method

Create a model which is trained using celebrity images for each face shape. These will act as benchmarks.

Square Face:

  • Billie Piper
  • Demi Moore
  • Jennifer Aniston
  • Katie Holmes
  • Kelly Osborne
  • Renee Zellweger
  • Sandra Bullock

Heart Face:

  • Cheryl Cole
  • Eva Longoria-Parker
  • Gwyneth Paltrow
  • Mary-Kate Olsen
  • Naomi Campbell
  • Nicole Richie
  • Reese Witherspoon

Oval Face:

  • Cameron Diaz
  • Charlize Theron
  • Cindy Crawford
  • Courtney Cox
  • Elle Macpherson
  • Julia Roberts
  • Rihanna

Round Face:

  • Drew Barrymore
  • Ginnifer Goodwin
  • Hayden Panettiere
  • Isla Fisher
  • Kate Bosworth
  • Mila Kunis

Finally, feed the user’s image into the newly trained model. The shape with the highest probability is the user’s face shape.

Implementing Deep Learning

Neural Network Used: CNN

Neurons in output layer: 4
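The final step described above, picking the class with the highest probability, can be sketched in Python. The label order is an assumption; the trained CNN itself is omitted here:

```python
# The trained CNN outputs one probability per face shape (4 output
# neurons); the highest-probability class is reported as the result.
FACE_SHAPES = ["square", "heart", "oval", "round"]

def predict_face_shape(probabilities):
    """Map the CNN's 4-neuron softmax output to a face shape label."""
    best = max(range(len(FACE_SHAPES)), key=lambda i: probabilities[i])
    return FACE_SHAPES[best]
```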

Test

The demo app can be found on GitHub:

https://github.com/VinitaSilaparasetty/face_shape_analyzer


References:

Square face shape:

https://www.cosmopolitan.com/uk/beauty-hair/celebrity-hair-makeup/g1646/celebrity-face-shapes-square-99685/

Heart face shape:

https://www.cosmopolitan.com/uk/beauty-hair/celebrity-hair-makeup/g1642/celebrity-face-shapes-heart-99644/

Oval face shape:

https://www.cosmopolitan.com/uk/beauty-hair/celebrity-hair-makeup/g1644/celebrity-face-shapes-oval-99666/

Round face shape:

https://www.cosmopolitan.com/uk/beauty-hair/celebrity-hair-makeup/g1645/celebrity-face-shapes-round-99677/

Random Outfit Generator

I already explained my idea of creating a wardrobe organizer using a CNN.

I then expanded on the idea. Most women wake up and wonder what to wear for the day. Why not use Python to build a random outfit generator from the images uploaded by the user?

Ideology

Randomly generate an outfit based on the weather. This is to avoid suggesting inappropriate combinations such as gloves in summer.

Methodology

  1. The app will randomly select articles of clothing and create a complete outfit. It will display:
  • 1 top (shirt, blouse, t-shirt, etc.)
  • 1 bottom (skirt, jeans, trousers, etc.)
  • 1 pair of shoes (heels, boots, etc.)

2. To achieve this, I first have to categorize the clothing into classes based on the weather.

To do this, I require three models:

  • Clothing Category Model: This model will detect what type of clothing is in the image.
  • Fabric Detection Model: This model will detect the fabric used in the garment. This is very important for clothing such as blouses: a blouse made of wool is more appropriate for winter.
  • Weather Category Model: This model takes the outputs of the previous two models, i.e. the type of garment and its fabric, to decide which weather the garment can be worn in.

3. Finally, I need to randomize the selection that the app makes for each item based on weather data.

Implementation of Deep Learning

Clothing Category Model

I had created a CNN model for the wardrobe organizer app, meant to categorize clothing into 10 basic classes. I then made the following modifications to increase the model’s capability:

  • Increased the number of neurons in the output layer from 10 to 50. Now that the model can distinguish more categories of clothing, it is easier to tell which ones can be worn in which season, e.g. shorts in summer.
  • The original model contained 5 layers in total. I added 2 extra hidden layers, which increases the accuracy of the model.

Fabric Detection Model

Now I need to create a CNN that can identify the fabrication of a garment.

Total number of layers: 6

Total number of neurons in the output layer: 1,000

Weather Category Model

Total number of layers: 5

Total number of neurons in output layer: 3

Implementation in Python

The input image goes through the following process:

Clothing Category Model → Fabric Detection Model → Weather Category Model → Python randomizer
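The pipeline can be sketched in Python. The three model functions below are hypothetical stubs standing in for the trained CNNs; only the chaining of their outputs and the final random pick mirror the actual design:

```python
import random

def classify_clothing(item):
    # stub for the Clothing Category Model
    return item["category"]

def detect_fabric(item):
    # stub for the Fabric Detection Model
    return item["fabric"]

def weather_category(category, fabric):
    # stub for the Weather Category Model: wool items map to winter here
    return "winter" if fabric == "wool" else "summer"

def random_outfit(wardrobe, weather):
    """Pick one top, one bottom, and one pair of shoes for the weather."""
    outfit = {}
    for slot in ("top", "bottom", "shoes"):
        candidates = [
            item for item in wardrobe
            if item["slot"] == slot
            and weather_category(classify_clothing(item),
                                 detect_fabric(item)) == weather
        ]
        outfit[slot] = random.choice(candidates)
    return outfit
```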

The Demo App is available on Google Play:

Dataset

Name: Deep Fashion

Citation

@inproceedings{liuLQWTcvpr16DeepFashion,
  author    = {Liu, Ziwei and Luo, Ping and Qiu, Shi and Wang, Xiaogang and Tang, Xiaoou},
  title     = {DeepFashion: Powering Robust Clothes Recognition and Retrieval with Rich Annotations},
  booktitle = {Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2016}
}

Wardrobe Organizer

At the age of 13, I was introduced to Polyvore, a cool website which allowed users to put together outfits in any way they liked. I loved it! It gave me the idea to create an app that worked in a similar way. This idea blossomed in my mind when AI was not yet as advanced as it is now. Later in this post, I will cover how I have updated the concept using Deep Learning.

Ideology

Enable women to experiment with their clothing virtually and come up with new combinations, rather than wasting hours actually changing into different clothing.

Method

  1. The user takes images of their clothing. The images do not have to be artistic; a shot of just one detail of the garment (for example, a beaded collar) is enough. The objective is for the user to be able to recognize the garment later on.
  2. The user then selects the type of clothing they just took a picture of from the list displayed on the screen.
  3. The user can then retrieve the images later.

I was unhappy with the method above because it is not user friendly. It demands too much effort from the user. I needed to simplify the process. Cue the magic of Deep Learning.

Applying Deep Learning

Then I came across a dataset known as “Fashion MNIST”. This is the solution to my problem! By designing and training a model on the Fashion MNIST dataset, I can have the app do the sorting for the user. All the user has to do is take images of the garments in their wardrobe.


Implementation in Tensorflow 2.0 with Keras

ANN Used: Convolutional Neural Network

Target Accuracy: 90% to 95%

Accuracy Achieved: 89.5%
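A minimal sketch of such a model in TensorFlow 2.x with Keras. The layer sizes below are illustrative assumptions, not the exact architecture of the original app:

```python
import tensorflow as tf

# A small CNN for Fashion MNIST: 28x28 grayscale images, 10 clothing classes.
def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu",
                               input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),  # one neuron per class
    ])

model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Training a model of this size with `model.fit` on `tf.keras.datasets.fashion_mnist` for a few epochs typically lands in the high-80s to low-90s accuracy range, consistent with the 89.5% reported above.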

The demo app is available on Google Play:

Skin Type Analysis Using Thermal Imaging

Around age 15, like most teenage girls, I was dealing with pimples. It puzzled me because my skin had always been on the dry side before the breakouts began. With a little more research, I discovered that I had combination skin. This means some areas of my skin were oily, while other parts were dry. So, each area needed to be treated differently. The challenge, then, is to identify which areas are oily and which areas are dry.

Fast forward to 2016, we have the power of AI. Here’s how I use AI to solve the problem.

Principle

1. The human body releases heat. This heat has an infrared signature, which can be detected by infrared cameras.

thermal image with scale

2. The regions of the thermal image that are red, yellow, and green are the warmer regions, with red being the hottest, while the regions that are light blue, dark blue, purple, and black are the cooler regions, with black being the coldest.

3. Combination skin is characterized by an oily forehead, nose, and chin and relatively dry cheeks.

4. “Excessive production of subdermal fat, modifies the dynamics of the bloodstream, and consequently temperature. A high percentage of this heat interchange is manifested as electromagnetic radiation with far-infrared wavelengths, which can be captured through a thermal imaging camera. “

https://www.ncbi.nlm.nih.gov/pubmed/24777529

Theory

Based on the research paper, I deduced that the thermal signature of oily skin will be higher than that of the surrounding skin. This will help me detect oily patches of skin.

Methodology

  • Identify features of the face to detect the location of oily skin.
  • Check for high temperatures, which will appear as red or orange according to the thermal image scale.
  • Return names of the regions of the face that are ‘oily’.

Implementation in Python
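The methodology above can be sketched in Python. Each face region is reduced to an average colour sampled from the thermal image; the RGB threshold for “red or orange” is an illustrative assumption:

```python
def is_hot(rgb):
    """Red/orange on a typical thermal scale: strong red, weak blue.
    The exact cut-offs are assumptions, not calibrated values."""
    r, g, b = rgb
    return r > 180 and b < 100

def oily_regions(region_colors):
    """region_colors maps face region name -> average (R, G, B) colour.
    Returns the names of regions whose colour falls in the hot band."""
    return [name for name, rgb in region_colors.items() if is_hot(rgb)]
```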

Test

I do not have access to a thermal camera, so I had to simulate a thermal image using a thermal image effect. Then, I used the app to analyze it.

Thermal Image

Output:

Nose


References:

The research paper is by Padilla-Medina JA, León-Ordoñez F, Prado-Olivarez J, Vela-Aguirre N, Ramírez-Agundis A, and Díaz-Carmona J (PubMed link above).

Skintone Analyzer

In the Skin Undertone Detection project, I mentioned that there was no concrete way to analyze skin and accurately detect the undertones. A similar problem occurs with skintone, especially for girls with naturally tan skin. Here is how I would implement a skintone analyzer using Artificial Intelligence.

Principle:

An individual’s skin pigmentation is the result of genetics (the product of both biological parents’ genetic makeup) and of exposure to the sun. The types of skintone are classified as:

  • Type I: Light skin
  • Type II: Fair skin
  • Type III: Medium skin
  • Type IV: Olive skin
  • Type V: Tan brown skin
  • Type VI: Rich brown skin

Ideology:

  1. Create a unique color scale with human skintones.
  2. The user takes an image of their wrist just below their palm.
  3. Match the color of the image with the unique color scale.

Implementation in Python
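The matching step can be sketched in Python as a nearest-colour lookup. The RGB values on the scale below are illustrative assumptions, not a calibrated skintone chart:

```python
# A unique colour scale of human skintones (placeholder RGB values).
SKINTONE_SCALE = {
    "Type I: Light":       (244, 222, 202),
    "Type II: Fair":       (230, 198, 168),
    "Type III: Medium":    (204, 162, 122),
    "Type IV: Olive":      (176, 130, 90),
    "Type V: Tan brown":   (140, 96, 62),
    "Type VI: Rich brown": (90, 58, 38),
}

def nearest_skintone(rgb):
    """Return the scale entry closest (squared Euclidean distance in RGB)
    to the average colour sampled from the user's wrist image."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(SKINTONE_SCALE, key=lambda name: dist(SKINTONE_SCALE[name], rgb))
```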

The demo app is available on Google Play:

Dataset

Name: Skin Dataset

Data Set Information:

The skin dataset was collected by randomly sampling B, G, R values from face images of various age groups (young, middle, and old), race groups (white, black, and Asian), and genders, obtained from the FERET database and the PAL database. The total sample size is 245,057, of which 50,859 are skin samples and 194,198 are non-skin samples. Color FERET Image Database: [Web Link]; PAL Face Database from the Productive Aging Laboratory, The University of Texas at Dallas: [Web Link].

Attribute Information:

This dataset has dimensions 245057 × 4, where the first three columns are the B, G, R values (features x1, x2, and x3) and the fourth column holds the class labels (decision variable y).

Skin Undertone Detection

At the age of 13, I noticed that most fashion magazines would create a quiz or just provide a few celebrity pictures to help women determine the undertone of their skin. However, this method is not accurate. There is a lot of scope for human error. So, I wanted to create an app that could analyze the undertone of skin and get the most accurate results possible. Here’s how I did it.

Principle Used

An undertone is the color from underneath the surface of the skin that affects its overall hue. By observing the color of the veins at the wrist, just below the palm, one can identify the skin’s undertone.

Red = Warm undertone

Blue = Cool undertone

Green = Neutral undertone

Keeping this fact in mind, I decided to create an android app that works in the following way:

Method

  1. The user takes a close up picture of their wrist, just below the palm.
  2. The app analyzes the picture and measures the amount of red, blue and green tones in the image.
  3. If the image has:
  • A higher content of red, the user has a warm undertone;
  • A higher content of green, the user has a neutral undertone;
  • A higher content of blue, the user has a cool undertone.

The first version of the app was implemented using Java in Android Studio. You can view the app on the Google Playstore by clicking the button below:

Python Implementation

First, I reprogrammed the Java version in Python.
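A minimal sketch of that channel-measuring step, with the image represented as a list of (R, G, B) pixel tuples for simplicity:

```python
def detect_undertone(pixels):
    """Sum each colour channel over the wrist image and map the
    dominant channel to an undertone (red=warm, green=neutral, blue=cool)."""
    totals = {"warm": 0, "neutral": 0, "cool": 0}
    for r, g, b in pixels:
        totals["warm"] += r
        totals["neutral"] += g
        totals["cool"] += b
    return max(totals, key=totals.get)
```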

Applying Machine Learning

With the recent advancements in AI, I decided to polish my original idea and make my app smarter by using KMeans clustering.

Code
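A hedged sketch of the KMeans version, assuming scikit-learn: the wrist pixels are clustered, and the centre colour of the largest cluster decides the undertone instead of raw channel sums. This makes the result less sensitive to background pixels and lighting noise:

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_color(pixels, n_clusters=3):
    """Cluster (R, G, B) pixels and return the centre of the largest cluster."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_)
    return km.cluster_centers_[np.argmax(counts)]

def undertone_from_color(rgb):
    """Map the dominant colour to an undertone by its strongest channel."""
    r, g, b = rgb
    if r >= g and r >= b:
        return "warm"
    if b >= r and b >= g:
        return "cool"
    return "neutral"
```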

Dataset

Name: Skin Dataset

Data Set Information:

The skin dataset was collected by randomly sampling B, G, R values from face images of various age groups (young, middle, and old), race groups (white, black, and Asian), and genders, obtained from the FERET database and the PAL database. The total sample size is 245,057, of which 50,859 are skin samples and 194,198 are non-skin samples. Color FERET Image Database: [Web Link]; PAL Face Database from the Productive Aging Laboratory, The University of Texas at Dallas: [Web Link].

Attribute Information:

This dataset has dimensions 245057 × 4, where the first three columns are the B, G, R values (features x1, x2, and x3) and the fourth column holds the class labels (decision variable y).

Virtual Reality Headset

Applicant Name: Vinita Silaparasetty

Patent ID: 307907

Patent Type: Design Patent

Patent Status: Patent Pending

Date of Application: 20-07-2018

Category: Technology

Sub-Category: Wearables

Product Description:

A headset with an additional, adjustable, visor which functions as the virtual reality component.

Product Highlights:

Can be used like a regular set of headphones by putting the visor up. Then when required, the visor can be pulled down over the eyes and it will function like a virtual reality headset.

Product Benefits:

  • Lightweight
  • Stylish
  • Dual Purpose

Use Cases:

  • Gaming/Recreation
  • Pilot Training
  • Veteran Training
  • Medical Training
  • Classroom Teaching Aid

VR Headset – Patent Pending

GPS Handbag

Type: High School Prototype Development Project.

Category: Wearables Project

Description: A GPS-enabled handbag to track lost or stolen handbags. The GPS module can be sewn into the bag. An Android app is used to track the location of the bag.

Applications: Women’s Handbags and Children’s school backpacks.

This is an idea I had back when I was 13, in 2007. I wanted to create a solution that could help ensure women’s safety. That’s when I thought of the “GPS Handbag”. Then, in 2013, I updated the idea and brought it to life. Here’s how I did it.

Hardware

gps handbag circuit

Components Used:

1) Adafruit GPS FLORA board.

2) Adafruit GPS Module.

3) Coin Cell Battery.

4) Conductive thread.

Android Application:

gps handbag app

Software Used:

1) Kivy

2) Python 3.7

Python Code:
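A minimal sketch of the position-parsing step, assuming the GPS module streams standard NMEA sentences (as the Adafruit GPS modules do). Serial I/O and the Kivy front end are omitted:

```python
def nmea_to_decimal(value, hemisphere):
    """Convert NMEA ddmm.mmmm (or dddmm.mmmm) to signed decimal degrees."""
    raw = float(value)
    degrees = int(raw // 100)
    minutes = raw - degrees * 100
    decimal = degrees + minutes / 60
    return -decimal if hemisphere in ("S", "W") else decimal

def parse_gpgga(sentence):
    """Extract (latitude, longitude) from a $GPGGA NMEA sentence,
    ready for the tracking app to display on a map."""
    fields = sentence.split(",")
    lat = nmea_to_decimal(fields[2], fields[3])
    lon = nmea_to_decimal(fields[4], fields[5])
    return lat, lon
```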

Handbag Design:

gps handbag structure

The handbag itself is an envelope clutch, adorned with delicate embroidery.

Prototype designed and created by Vinita Silaparasetty.

The aforementioned hardware components are sewn into the interior of the handbag to minimize the chances of tampering.