We are looking for someone to join the team to help design and implement a Big Data ecosystem using Hadoop, facilitating data integration, testing, and delivery to our data testing system.
- 6+ years’ work experience in the field of computer science, including Big Data Hadoop infrastructure and development work.
- Hands-on experience with the Hadoop ecosystem, including HDFS, Spark, Hive, Oozie, Sqoop, Flume, and ZooKeeper.
- Working experience with Spark Streaming and Spark SQL.
- Working experience in Python.
- Working experience in a Linux environment.
- In-depth understanding of Hadoop architecture, workload management, schedulers, and scalability options.
- Apache Zeppelin notebook experience is a strong plus.
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and as a member of a team.
- Strong verbal and written communication skills.