# Toxic-Comment-Classification

We are provided with a dataset of social media comments, each manually labelled as Toxic, Severe Toxic, Obscene, Threat, Insult, Identity Hate, or Non-Toxic. The task is to build a multi-class toxic comment classifier (see the sketch after this list) by:

  1. Trying the different vectorization approaches studied in class

  2. Measuring the efficacy of each model with the avg-AUC (area under the ROC curve) metric

  3. Hyperparameter-tuning the models
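
Below is a minimal sketch of one possible pipeline covering all three steps: TF-IDF vectorization, a one-vs-rest logistic regression classifier, grid-searched hyperparameters, and an average ROC-AUC score on a held-out split. The file name `train.csv` and the column names (`comment_text` plus the six label columns) are assumptions about the dataset layout, not confirmed by this repository.

```python
# Sketch: TF-IDF vectorization + one-vs-rest logistic regression,
# tuned with GridSearchCV and evaluated with average ROC AUC.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import Pipeline

# Assumed label columns from the toxic-comment dataset.
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

df = pd.read_csv("train.csv")  # assumed file name
X_train, X_val, y_train, y_val = train_test_split(
    df["comment_text"], df[LABELS], test_size=0.2, random_state=42
)

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english")),
    ("clf", OneVsRestClassifier(LogisticRegression(max_iter=1000))),
])

# Hyperparameter tuning over both the vectorizer and the classifier.
param_grid = {
    "tfidf__ngram_range": [(1, 1), (1, 2)],
    "tfidf__max_features": [20000, 50000],
    "clf__estimator__C": [0.1, 1.0, 10.0],
}
search = GridSearchCV(pipeline, param_grid, scoring="roc_auc", cv=3, n_jobs=-1)
search.fit(X_train, y_train)

# Average ROC AUC across the six labels on the validation split.
val_scores = search.predict_proba(X_val)
print("avg-AUC:", roc_auc_score(y_val, val_scores, average="macro"))
```

Swapping the `TfidfVectorizer` for a plain `CountVectorizer` (or other vectorizers covered in class) and comparing the resulting avg-AUC scores is one way to carry out step 1.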
