Position: Hadoop Consultant
Duration: Long term
Location: Montreal, QC
We are looking for an experienced infrastructure engineer who enjoys building large-scale data systems and has prior experience working with one or more Hadoop distributions.
Responsibilities:
Evaluate Hadoop infrastructure requirements and design/deploy solutions (high availability, big data clusters, elastic load tolerance, etc.)
Develop automation, installation and monitoring of Hadoop ecosystem components; specifically: Spark, HBase, HDFS, Map/Reduce, Yarn, Oozie, Pig, Hive, Impala, Kafka, Accumulo
Dig deep into performance, scalability, capacity and reliability problems to resolve issues
Create productive models for integrating internal application teams and vendors with our infrastructure
Troubleshoot and debug Hadoop ecosystem run-time issues
Provide developer and operations documentation
Run Proof of concept projects with our customers
Required Skills:
Proficiency in one or more of Java, Python, SQL, or Scala
Experience with the Hadoop ecosystem
Knowledge of working with the Cloudera, MapR, or HDP distribution of Hadoop
2+ years of proven experience building and scaling out Hadoop-based or Unix-hosted database infrastructure for an enterprise (software, network, storage, and related)
2+ years of Hadoop administration experience, or a strong and diverse background in distributed cluster management and operations
2+ years of system configuration experience with Chef, Puppet, Ansible, etc.
2+ years of DevOps or system administration experience writing software in a continuous build and automated deployment environment
Open source experience is a plus (a well-curated blog, an upstream-accepted contribution, or community presence)
Preferred Skills:
Experience with ETL and BI tools
Kerberos experience a plus
CM API experience using the Python/Java API
Thanks & Regards,
Prasanth
678-740-6857
[email protected]