Sr. Data Engineer

Mexico City, Mexico City, Mexico

Framework Science is on a mission to explore new technologies and build tomorrow's applications. We hire top engineers and designers, offering great benefits and pay so they can focus on solving what has never been solved before. Our aim is to push the needle of innovation while enabling technical staff to shape code and products at the architecture level. Work with very bright people and prosper economically for the value you bring to the table. Our culture puts engineers first, with work-life balance and an environment that sparks the imagination. Join us as we dare to explore the unknown!

Where do we come from?
Broadcom, Yahoo, Sony, Samsung, Thermo Fisher, Blackbaud, and many other well-known tech companies

Your mission:
Manage and optimize our cloud data platform.

Must Have:
Technical Skills:
○ Strong programming skills; proficiency in at least one of the following languages: Python, Scala, or Java.
○ Working knowledge of PySpark, pandas DataFrames, Spark SQL, etc.
○ Working knowledge of messaging and data-pipeline tools such as Apache Kafka and Amazon Kinesis.
○ Experience developing APIs using frameworks such as Flask or Django.
○ Experience with stream-processing systems such as Apache Spark Streaming or Apache Storm.
○ Experience with open table and in-memory table formats for very large analytics datasets: Iceberg, Parquet, Arrow, Avro, etc.
○ Experience in writing and understanding complex SQL queries.
○ Has developed at least one data pipeline involving collecting/streaming, storing, and processing (ETL) data for various business use cases.
○ Has been an integral part of a team working with structured, semi-structured, and unstructured large datasets from real-time and batch streaming data feeds.

● Experience with AWS cloud services: EMR, Glue, Athena, RDS, Redshift.
● Experience with data-pipeline orchestration tools such as Airflow, Azkaban, and Luigi.
● Experience working with NoSQL databases like Apache Solr, DynamoDB, MongoDB.
● Knowledge of HDFS, Flume, Hive, and MapReduce.
● Experience with a data warehouse such as AWS Redshift or Snowflake is a plus.

Education: At least a Bachelor’s degree in Computer Science or Applied Mathematics.