Current Openings

Discover your top skills, introduce yourself, and we'll match you to the best role.

We have one of the best vetting processes in the industry, involving six interconnected stages:

  • Profile Screening & Shortlisting
  • Communication Skills Assessment
  • Skill Set & Culture Fit Evaluation
  • Technical Evaluation
  • Interview with Clients
  • Background Checks

If you're rocking some serious tech skills, we'd like to chat with you!

  • 🔍 Now Hiring: Senior Data Engineer – Azure | Databricks | Big Data

    Location: Remote
    Experience: 5+ Years
    Employment Type: Full-Time

    We’re seeking a Senior Data Engineer with a strong foundation in modern data platforms and cloud technologies to join our growing team. This role demands technical expertise in data pipeline development, orchestration, automation, and scalable infrastructure using Azure, Databricks, and open-source tools.

    Key Responsibilities:
    • Design, implement, and maintain ETL/ELT pipelines using Python, Scala, or Java.
    • Ingest and transform data across RDBMS (SQL Server, PostgreSQL, MySQL) and NoSQL databases (e.g., Cassandra, YugabyteDB).
    • Use Azure Data Factory (ADF) to orchestrate and automate complex workflows.
    • Build scalable data transformation solutions on Databricks using PySpark (see the sketch after this list).
    • Set up data infrastructure and tooling for development, testing, release, and support.
    • Define and enforce best practices for data lifecycle management and automation.
    • Implement CI/CD pipelines for data workflows and analytics workloads.
    • Use tools such as Apache Airflow or Oozie for job orchestration (a minimal DAG sketch also follows this list).
    • Collaborate with cross-functional teams and manage external stakeholder expectations.
    • Monitor system performance, troubleshoot data pipeline issues, and perform root-cause analysis.
    • Guide and mentor junior data engineers and team members.
    • Track KPIs and support data-driven decision-making through reporting.
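
    To give candidates a concrete feel for the Databricks/PySpark work above, here is a minimal illustrative sketch; the table, column, and app names are hypothetical, not part of any actual project.

        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        # On Databricks a SparkSession is already provided as `spark`;
        # getOrCreate() keeps this sketch runnable elsewhere too.
        spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

        # Hypothetical source: raw order events landed in the lakehouse.
        orders = spark.read.table("raw.orders")

        # A typical transformation: filter bad records, derive a date
        # column, and aggregate revenue per day and region.
        daily_revenue = (
            orders
            .filter(F.col("status") == "completed")
            .withColumn("order_date", F.to_date("created_at"))
            .groupBy("order_date", "region")
            .agg(F.sum("amount").alias("revenue"),
                 F.count("*").alias("order_count"))
        )

        # Persist as a managed table for downstream reporting.
        daily_revenue.write.mode("overwrite").saveAsTable("analytics.daily_revenue")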
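
    Likewise, the Airflow orchestration mentioned above often takes the shape of a small DAG. The sketch below assumes Airflow 2.4+ (which accepts the `schedule` argument) and uses hypothetical task and DAG names.

        from datetime import datetime

        from airflow import DAG
        from airflow.operators.python import PythonOperator

        def extract():
            # Hypothetical step: pull the latest order events from the
            # source RDBMS into the lakehouse landing zone.
            pass

        def transform():
            # Hypothetical step: kick off the Spark transformation job.
            pass

        with DAG(
            dag_id="orders_daily",           # hypothetical pipeline name
            start_date=datetime(2024, 1, 1),
            schedule="@daily",
            catchup=False,
        ):
            extract_task = PythonOperator(task_id="extract", python_callable=extract)
            transform_task = PythonOperator(task_id="transform", python_callable=transform)
            extract_task >> transform_task   # transform runs after extract succeeds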


    Required Skills & Experience:
    • 5+ years of experience in Data Engineering or Big Data environments.
    • Strong expertise in Azure cloud services, especially ADF and Databricks.
    • Proficiency with PySpark, SQL, and big data processing frameworks.
    • Hands-on experience with Terraform, Kubernetes, and cloud infrastructure tooling.
    • Deep understanding of data warehouse design, data modeling, and performance tuning.
    • Experience with open-source data stack technologies (e.g., Spark, Airflow).
    • Familiarity with Agile development and DevOps principles in a data context.
    • Experience with ETL tools such as Informatica PowerCenter is a plus.
    • Excellent communication, collaboration, and presentation skills.