About the Role
Abnormal Security empowers enterprises of all sizes to combat cybercrime with our advanced cloud products. As the threat landscape continuously evolves, we emphasize vigilance and adaptability in our defense strategies. The Detection Data Science team plays a crucial role in providing insights into current and historical product performance, emerging attack trends, and opportunities to enhance our detection capabilities. This requires accurate and timely data sourced from numerous product applications and thousands of detection signals across billions of emails. The team integrates all relevant data into a comprehensive analytics pipeline using DBT (Data Build Tool). This pipeline facilitates a wide range of functions, including the creation of metrics and dashboards, data analysis, model development, and the generation of customer reports.
Abnormal Security is looking for an Analytics Engineer to join the Detection Data Science team. The Analytics Engineer will be dedicated to developing, maintaining, and expanding the DBT analytics pipeline to support Abnormal Security's data-driven decision-making processes. This role is central to the Data Science team's charter to ensure the reliability, scalability, and efficiency of our analytics infrastructure, enabling the organization to harness the power of data across all regions.
What you will do
- Take full ownership of the DBT analytics pipeline, ensuring its reliability and performance through continuous monitoring, maintenance, and optimization
- Design, develop, and implement innovative data models to enhance our analytics capabilities, collaborating closely with stakeholders to deliver high-quality data solutions
- Expand the DBT analytics pipeline into new regions, ensuring compliance with regional data regulations and standards
- Build and maintain the DBT analytics pipeline within the FedRAMP environment, adhering to strict security and compliance requirements; additionally, create and manage a Global DBT pipeline, integrating non-PII data from all regions into a unified workspace
- Ensure data consistency, accuracy, and accessibility across our global analytics infrastructure
- Continuously seek out and implement innovative solutions to improve the analytics pipeline
Must Haves
- 3+ years of experience working in large-scale, data-driven environments, especially in the data transformation layer
- Expertise in writing complex SQL queries and building SQL data models
- 2+ years of experience with DBT (Data Build Tool)
- Understanding and application of software engineering principles to analytics code
- Experience with at least one programming language (Python or R) for handling data orchestration tasks in workflow management software (e.g., Airflow)
- BS degree in Computer Science, Applied Sciences, Information Systems, or another related quantitative field
- Proven experience translating business processes into data structures that are optimized for analysis.
- Proven experience working effectively with cross-functional teams.
Nice to Have
- MS degree in Computer Science, Electrical Engineering, or another related engineering field
- Experience with cloud data warehouses (e.g., Snowflake, Redshift) or ETL tools (e.g., AWS Glue)
- Experience with Business Intelligence tools (e.g., Tableau, Looker)