Full Time
$2,000 to $2,500 per month, depending on experience
40
Mar 26, 2025
About the Role
We are seeking a highly skilled Data Engineer to design, build, and maintain our data infrastructure, with a strong emphasis on data integration, architecture, and pipeline development. The ideal candidate will have expert-level SQL and Python skills and a deep understanding of data modeling, ETL processes, and system integrations.
This role requires a strong technical foundation, problem-solving skills, and the ability to work across teams to enable data-driven decision-making. You will play a crucial role in optimizing data flows and ensuring efficient data storage, access, and processing.
This is a full-time remote position, and the candidate should be comfortable working in either the Pacific Standard Time (PST) or Eastern Standard Time (EST) time zone.
Key Responsibilities
Data Integration & Pipeline Development: Design and implement scalable ETL/ELT pipelines to integrate data from multiple sources into a centralized data platform.
Data Architecture: Develop and maintain robust data models, schemas, and warehouse structures to support analytics and business intelligence needs.
SQL & Database Management: Write and optimize complex SQL queries for data transformation, extraction, and performance tuning.
Python Development: Build and maintain automation scripts, data transformation processes, and API integrations using Python.
Scalability & Performance Optimization: Ensure data systems are optimized for performance, reliability, and scalability.
Collaboration: Work closely with data analysts, data scientists, and software engineers to support data needs across the organization.
Monitoring & Debugging: Implement logging, monitoring, and alerting solutions to ensure data integrity and pipeline health.
Qualifications & Experience
5+ years of experience in data engineering, software development, or a related field.
Expert proficiency in SQL, including performance tuning, query optimization, and database management.
Expert-level Python skills, including experience with data processing libraries (e.g., Pandas, PySpark, SQLAlchemy).
Strong experience with data integration across APIs, third-party services, and internal systems.
Deep understanding of data architecture principles, including data modeling, warehouse design, and distributed systems.
Experience working with cloud-based data platforms (e.g., AWS, GCP, Azure, Snowflake, BigQuery, Redshift).
Familiarity with workflow orchestration tools (e.g., Apache Airflow, Prefect, Dagster).
Experience with version control (Git), CI/CD pipelines, and containerization (Docker, Kubernetes) is a plus.
Hands-on experience with ETL/ELT tools, as well as code-based data ingestion from APIs.
Preferred Skills
Knowledge of streaming data technologies (e.g., Kafka, Kinesis, Pub/Sub).
Familiarity with data governance, security, and compliance best practices.
IMPORTANT: If you are interested and qualified, please submit a link to your resume AND your voice recording to