Senior Databricks Developer for Ciklum Digital
3 days ago


On behalf of Ciklum Digital, Ciklum is looking for a Senior Databricks Developer to join the Kyiv team on a full-time basis.

Project description:

  • The project is about Data Processing & Analytics; the team will be using tools such as Azure Data Services (ADFv2, ADLS, Cosmos DB, etc.), Azure Databricks, Spark, Python, SQL, and Power BI
  • The team will be working closely with the Client’s Data & AI practice members
  • It is a great opportunity to learn Azure best practices. Working with the UK’s No. 1 Azure development partner, developers will gain further experience in using containers inside Azure, combined with Azure-native PaaS components
  • Azure environments will be built according to best practices. Fully automated Azure DevOps CI/CD will be built and utilised during the project

    Fast-paced, agile environment
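To illustrate the kind of fully automated Azure DevOps CI/CD mentioned above, a minimal Azure Pipelines definition might look roughly like the sketch below. This is not taken from the project; stage names, paths, and the deployment target are assumptions for illustration only.

```yaml
# azure-pipelines.yml — illustrative sketch, not the project's actual pipeline
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Build
    jobs:
      - job: Test
        steps:
          - task: UsePythonVersion@0
            inputs:
              versionSpec: '3.10'
          - script: |
              pip install -r requirements.txt
              pytest tests/
            displayName: Run unit tests
  - stage: Deploy
    dependsOn: Build
    jobs:
      - job: DeployNotebooks
        steps:
          # Assumes the (legacy) Databricks CLI is configured on the agent;
          # paths are hypothetical
          - script: databricks workspace import_dir notebooks /Shared/etl --overwrite
            displayName: Deploy Databricks notebooks
```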


Responsibilities:
  • Responsible for the building, deployment, and maintenance of mission-critical analytics solutions that process data quickly at big data scales
  • Contributes to design, code, configurations, and documentation for components that manage data ingestion, real-time streaming, batch processing, data extraction, transformation, and loading across multiple data storages
  • Owns one or more key components of the infrastructure and works to continually improve it, identifying gaps, and improving the platform’s quality, robustness, maintainability, and speed
  • Cross-trains other team members on technologies being developed, while also continuously learning new technologies from other team members
  • Interacts with engineering teams and ensures that solutions meet customer requirements in terms of functionality, performance, availability, scalability, and reliability
  • Performs development, QA, and DevOps roles as needed to ensure complete end-to-end responsibility for solutions
  • Contributes to CoE activities and community building, participates in conferences, and shares best practices
Requirements:

    You can name examples of using these skills in different contexts, and are guided by best practices and their specifications:

  • 3+ years of experience coding in SQL, Python, and, desirably, Scala, with solid CS fundamentals, including data structures and algorithm design
  • 2+ years of contribution to production deployments of large backend data processing and analysis systems as a team lead
  • 1+ years of hands-on implementation experience with a combination of the following technologies: SQL and NoSQL data stores, such as HBase and Cassandra, as well as Hadoop, MapReduce, Pig, Hive, Impala, Spark, Kafka, Storm
  • 1+ years of experience with Azure cloud data platforms
  • Good understanding of Azure Data Lake Storage Gen2 and the Azure DevOps platform, including CI/CD pipelines
  • Practical experience in deployment automation using CI/CD pipelines
  • Good understanding of DevOps principles
  • Databricks hands-on experience
  • Knowledge of Apache Spark and PySpark
  • Knowledge of SQL and MPP databases (e.g., PDW, Exadata, Vertica, Netezza, Greenplum, Aster Data)
  • Knowledge of professional software engineering best practices for the full software development life cycle
  • Knowledge of Data Warehousing design, implementation, and optimization
  • Knowledge of Data Quality testing, automation, and results visualization
  • Knowledge of BI reports and dashboards design and implementation
  • Knowledge of development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
  • Extensive experience of data-oriented projects
  • Experience participating in an Agile software development team, e.g., SCRUM
  • Experience designing, documenting, and defending designs for critical components in large distributed computing systems
  • A consistent track record of delivering exceptionally high-quality software on large, complex, cross-functional projects
  • Demonstrated ability to learn new technologies quickly and independently
  • Ability to handle multiple competing priorities in a fast-paced environment
  • Undergraduate degree in Computer Science or Engineering from a top CS program required; Master’s preferred
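The requirements above mention Data Quality testing and automation; a minimal, framework-free sketch of one such check (a null-rate threshold on a column) is shown below. All function and field names here are illustrative, not taken from the posting.

```python
def null_rate(rows, column):
    """Fraction of rows where `column` is None or missing."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

def check_threshold(rows, column, max_null_rate=0.1):
    """Return (passed, observed_rate) for a simple data-quality rule."""
    rate = null_rate(rows, column)
    return rate <= max_null_rate, rate

# Tiny example dataset: one of four rows is missing `amount`
rows = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},
    {"id": 3, "amount": 7.5},
    {"id": 4, "amount": 2.0},
]
ok, rate = check_threshold(rows, "amount", max_null_rate=0.3)
print(ok, rate)  # → True 0.25
```

In practice such checks would typically run over Spark DataFrames inside a pipeline stage and feed a results dashboard, but the threshold-and-report pattern is the same.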
Desirable:

    You should understand the subject and its components, be able to explain them, and have experience using the following skills:

  • Experience supporting data scientists and complex statistical use cases is highly desirable
  • Understanding of cloud infrastructure design and implementation
  • Experience in data science and machine learning
  • Experience in backend development and deployment
  • Experience in CI/CD configuration
  • Good knowledge of enterprise data analysis
Personal skills:

  • Curious mind and willingness to work with the client in a consultative manner to find areas for improvement
  • Intermediate+ English or higher
  • Good analytical skills
  • Good team player, motivated to develop and solve complex tasks
  • Self-motivated, self-disciplined and result-oriented
  • Strong attention to detail and accuracy
What's in it for you:

    A Centre of Excellence is ultimately a community that allows you to improve yourself and have fun. Our Centres of Excellence (CoEs) bring together Ciklumers from across the organization to share best practices, support, advice, and industry knowledge, and to create a strong community.

  • Close cooperation with client
  • A constant flow of new projects
  • Dynamic and challenging tasks
  • Ability to influence project technologies
  • Projects from scratch
  • Team of professionals: learn from colleagues and gain recognition for your skills
  • European management style
  • Continuous self-improvement