Big Data Engineer

Experience designing and developing big data solutions using Spark, Scala, AWS Glue, and Lambda is a must.

Location: Boston, MA (locals only). Face-to-face interview required. No H1B. Share DL, visa, and LinkedIn.

Responsibilities:
- Design and implement production big data environments using modern technologies.
- Build, validate, and optimize large-scale big data solutions in heterogeneous data environments.
- Develop and manage ETL pipelines in batch (Spark and Scala) and streaming (Apache Flink).

To be successful in the role you must have:
- Strong experience with Scala, Spark, software design patterns, and TDD.
- Experience with big data: Spark is a must; Hadoop, Hive, and/or Kafka would be a plus.
- Experience with different database structures, including SQL (Postgres, MySQL) and NoSQL (DynamoDB, DocumentDB, Redis, Elasticsearch).
- Experience and expertise in data integration and data management with high data volumes.
- Experience with cloud ecosystems; the AWS ecosystem (EMR, EC2, IAM, Glue, Athena, S3, CloudFormation, Lake Formation, Redshift, DynamoDB, RDS, ECS, and ECR) would be a great plus.
- Experience working in an agile continuous integration/DevOps paradigm and toolset (Git, Jenkins, Sonar, Nexus, Jira, and Splunk).
- Bachelor's degree in Computer Science, Information Systems, Mathematics, or another STEM or related field, or equivalent work experience.
- Professional working proficiency in English.

Nice to have:
- Experience with data warehousing, Apache Iceberg, and visualization tools.
- Experience with near-real-time (NRT) applications: Apache Flink, Spark Streaming.