
Manas Ranjan Kar
Senior Manager, Episource


I currently lead the NLP and data science practice at Episource, a US healthcare company. My daily work revolves around semantic technologies and computational linguistics (NLP): building algorithms and machine learning models, researching data science journals, and architecting secure product backends in the cloud.

The tech stack that my team and I typically work with includes:

Language: Python
Testing Frameworks: unittest, pytest
Automation & Configuration Management: Ansible, Docker, Vagrant
CI: Travis CI
Cloud Services: AWS, Google Cloud, MS Azure
APIs: Bottle, CherryPy, Flask
Databases: MySQL, SQLite, MSSQL, RDF stores, Neo4j, Elasticsearch, MongoDB, Redis
Editors: Sublime Text, PyCharm

I have architected multiple commercial NLP solutions in the areas of healthcare, food and beverage, finance, and retail. I am deeply involved in architecting large-scale business process automation and extracting deep insights from structured and unstructured data using natural language processing and machine learning. I have contributed to NLP libraries such as Gensim and ConceptNet5, and I blog regularly about NLP on forums such as Data Science Central and LinkedIn, as well as on my own blog, Unlock Text.

I love teaching and mentoring students. I speak regularly on NLP and text analytics at conferences and meetups such as PyCon India and PyData. I have also taught multiple hands-on sessions at IIM Lucknow and MDI Gurgaon, and I have mentored students from schools such as ISB Hyderabad, BITS Pilani, and the Madras School of Economics. When bored, I like to fall back on Asimov to lead me into an alternate reality.

Sessions

14:05–14:45 Wednesday, 23 May 2018
Manas Ranjan Kar (Episource)
At Episource, we build deep learning frameworks and architectures to summarize a medical chart and to extract medical coding opportunities and their dependencies, in order to recommend the best possible ICD-10 codes. This required not only building a wide variety of deep learning algorithms to account for natural language variations but also running fairly complex in-house training data creation exercises.
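
To illustrate the task framing the abstract describes (multi-label ICD-10 code recommendation from chart text), here is a minimal sketch in Python. It is not Episource's framework: it swaps the deep learning models discussed in the talk for a simple TF-IDF plus one-vs-rest logistic regression baseline, and the de-identified chart snippets and ICD-10 labels are hypothetical examples invented for illustration.

# Minimal sketch: multi-label ICD-10 code recommendation from chart text.
# Toy baseline only -- not Episource's deep learning architecture.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical de-identified chart snippets and their ICD-10 code sets.
charts = [
    "patient presents with type 2 diabetes mellitus, poorly controlled",
    "essential hypertension, continue lisinopril",
    "type 2 diabetes with diabetic polyneuropathy",
    "hypertension and chronic kidney disease stage 3",
]
labels = [["E11.9"], ["I10"], ["E11.42"], ["I10", "N18.30"]]

# Encode the label sets as a binary indicator matrix for multi-label learning.
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)

# One binary classifier per ICD-10 code over word/bigram TF-IDF features.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(charts, y)

# Recommend codes for a new chart snippet; with this toy data the
# predicted label set may be small or empty.
pred = model.predict(["follow-up for poorly controlled type 2 diabetes"])
print(mlb.inverse_transform(pred))

A production system along the lines described in the session would replace the TF-IDF baseline with learned representations and handle far larger code inventories and natural language variation, but the multi-label fit/predict framing stays the same.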