Business Intelligence Engineer (Remote)
Argentina, Brazil, Canada, United States
Telnyx
Your one-stop shop for infrastructure at the edge. The Telnyx Connectivity Cloud helps your business connect people, devices, and applications everywhere.

About Telnyx
Telnyx is an industry leader that's not just imagining the future of global connectivity—we're building it. From architecting and amplifying the reach of a private, global, multi-cloud IP network, to bringing hyperlocal edge technology right to your fingertips through intuitive APIs, we're shaping a new era of seamless interconnection between people, devices, and applications.
We're driven by a desire to transform and modernize what's antiquated, automate the manual, and solve real-world problems through innovative connectivity solutions. As a testament to our success, we're proud to stand as a financially stable and profitable company. Our robust profitability allows us not only to invest in pioneering technologies but also to foster an environment of continuous learning and growth for our team.
Our collective vision is a world where borderless connectivity fuels limitless innovation. By joining us, you can be part of laying the foundations for this interconnected future. We're currently seeking passionate individuals who are excited about the opportunity to contribute to an industry-shaping company while growing their own skills and careers.
The role
As a BI Engineer on the Revenue Operations team, you will design and own the data architecture for the Telnyx Growth organization. You’ll have the freedom to reshape our growth data infrastructure using your expertise in SQL, Python, and modern data tooling. You will be the go-to expert for data pipelines, ETL, and BI data modeling, ensuring that our data is reliable, well-structured, and primed for analysis in Tableau. If you love building data systems from the ground up and enabling teams to make data-driven decisions, this role is for you!
Responsibilities
- Design & Implement Data Architecture: Collaborate with the data engineering team to architect and build the data infrastructure to support reporting and analytics needs. This includes designing scalable data models and tables that will feed our BI tool (Tableau), and choosing the right tools/technologies for our data stack. You’ll lay down the blueprint for how data flows from raw sources to final dashboards.
- Develop ETL Pipelines & DAGs: Create and maintain robust ETL pipelines to collect, transform, and load data from various sources (e.g., our CRM, product database, marketing platforms) into our data warehouse. Use an orchestration tool such as Airflow to schedule and manage these workflows as DAGs, ensuring data is up to date for analytics (see the Airflow sketch after this list). Optimize these pipelines for scalability and reliability as data volumes grow.
- Ensure Data Quality & Governance: Establish data governance practices to ensure high data quality and security. Implement checks for data accuracy (a minimal example follows this list), monitor pipeline health, and maintain thorough documentation of data definitions and transformations. Manage user access and data permissions for sensitive revenue and sales data, keeping compliance and best practices in mind.
- Tableau BI Foundation Buildout: Take ownership of integrating our data with Tableau. Define how data should be structured for efficient reporting, for example by designing aggregate tables or data extracts for Tableau (illustrated after this list). Work closely with data analysts to ensure that the datasets you prepare enable them to create self-service dashboards and visualizations easily. Your goal is to empower analysts to build charts without worrying about data wrangling.
- Collaborate with Cross-Functional Teams: Work with stakeholders in Revenue Operations, Sales, Marketing, and Finance to understand their data needs. Translate business requirements into technical specs for data models and ensure the data infrastructure supports key metrics (e.g., pipeline metrics, revenue forecasting, customer analytics). Provide guidance on how to best utilize the data in Tableau for their strategic decisions.
- Autonomous Project Management: Plan and prioritize your own work effectively. Regularly communicate progress and roadblocks. You are comfortable working independently with minimal supervision, while also knowing when to involve team members for input or support.
- Continuous Improvement: Stay up-to-date with modern data tools and best practices. Recommend improvements to our data stack as needed – whether it’s a new transformation tool, a better way to structure data for Tableau, or a workflow automation. As we grow, proactively refactor and scale our data pipelines and models to accommodate new data sources and increased complexity.
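To make the orchestration bullet above concrete, here is a minimal sketch of a daily ETL DAG in Airflow (2.4+ syntax, where `schedule` replaced `schedule_interval`). The DAG id, task names, and placeholder callables are illustrative assumptions, not an actual Telnyx pipeline:

```python
# Minimal sketch of a daily ETL DAG in Airflow 2.4+.
# The dag_id, task names, and placeholder callables are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_crm():
    """Pull yesterday's records from the CRM (placeholder)."""
    print("extracting...")


def transform_records():
    """Clean and reshape the raw records (placeholder)."""
    print("transforming...")


def load_warehouse():
    """Write the transformed rows into the warehouse (placeholder)."""
    print("loading...")


with DAG(
    dag_id="crm_daily_etl",           # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # one run per day
    catchup=False,                    # don't backfill missed runs
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_crm)
    transform = PythonOperator(task_id="transform", python_callable=transform_records)
    load = PythonOperator(task_id="load", python_callable=load_warehouse)

    extract >> transform >> load      # dependencies define the DAG edges
```

Airflow then handles scheduling, retries, and run history; in practice each placeholder would talk to a real source or warehouse connection.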
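And a hedged sketch of the accuracy checks mentioned under data quality: simple assertions run after a load, before the data reaches Tableau. The column names (`account_id`, `mrr_usd`, `close_date`) are hypothetical:

```python
# Sketch of post-load data-quality checks; column names are hypothetical
# and close_date is assumed to be a datetime column.
import pandas as pd


def validate_revenue_table(df: pd.DataFrame) -> list[str]:
    """Return a list of failed checks (an empty list means the table passed)."""
    failures = []
    if df["account_id"].isna().any():
        failures.append("account_id contains NULLs")
    if df["account_id"].duplicated().any():
        failures.append("account_id is not unique")
    if (df["mrr_usd"] < 0).any():
        failures.append("mrr_usd has negative values")
    if df["close_date"].max() > pd.Timestamp.today():
        failures.append("close_date contains future dates")
    return failures
```

Checks like these can run as the final task of a DAG so that a bad load fails loudly instead of silently feeding dashboards.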
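Finally, one common way to structure data for efficient Tableau reporting is to pre-aggregate a deal-level fact table down to the grain a dashboard actually needs. Again a sketch with assumed column names:

```python
# Sketch: collapse deal-level rows to one row per day/segment/region so the
# Tableau extract stays small and fast. Column names are hypothetical.
import pandas as pd


def build_daily_revenue_rollup(deals: pd.DataFrame) -> pd.DataFrame:
    """Aggregate won deals to a daily grain for dashboard consumption."""
    return (
        deals.groupby(["close_date", "segment", "region"], as_index=False)
        .agg(
            deals_won=("opportunity_id", "count"),
            revenue_usd=("amount_usd", "sum"),
        )
    )
```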
What we are looking for
- Experience: 2+ years of hands-on experience in data engineering, analytics engineering, or BI development. You should have built or maintained data pipelines and data models in a professional setting. Experience in a startup or high-growth company is strongly preferred – you thrive in environments with rapid change and limited structure, and you’ve built solutions from scratch (or with limited resources) before.
- Technical Skills:
  - SQL expertise – ability to write complex queries and optimize them for performance. You can design normalized and denormalized schemas and understand how to build efficient data warehouses and data marts.
  - Programming in Python (or a similar scripting language) – especially for data processing tasks. You’ve written scripts or used frameworks for ETL, and you’re comfortable handling data in code (pandas, etc.).
  - Business Intelligence Tools – direct experience with Tableau is a big plus. You know how to model data for visualization, create calculated fields or aggregated tables, and may have built Tableau dashboards yourself. Experience with other BI tools (Looker, Power BI, Domo) is acceptable if you can quickly learn Tableau.
  - Data Modeling & Warehousing – a solid grasp of data modeling concepts (star schema, snowflake schema, slowly changing dimensions, etc.) and experience structuring a data warehouse or analytics database. You can create scalable datasets that won’t bog down as data grows, and you design with the end use (analytics) in mind. (A short slowly-changing-dimension sketch follows this list.)
  - Data Governance & Quality – familiarity with data governance principles. You’ve implemented or adhered to practices ensuring data accuracy (e.g., testing data, validation checks), consistency in metric definitions, and documentation (data dictionaries). You take data quality seriously and have strategies to maintain it.
- Autonomy & Drive: Proven ability to work independently and take ownership of projects. You manage your time effectively, can set realistic milestones, and deliver without heavy oversight. This role is remote and autonomous – you should be self-motivated and comfortable defining what needs to be done.
- Collaboration & Communication: Strong communicator who can work with non-technical stakeholders. You should be able to understand the needs of the RevOps team (and related departments) and translate those into data solutions. Likewise, you can explain data concepts or pipeline status to a business audience. You’re not siloed; you engage with the team’s goals and proactively offer data insights or improvements.
- Time Zone Availability: Willing and able to overlap at least 5 hours daily with US Central Time (CST) business hours. Since our team is largely US-based, you’ll need a significant part of your workday aligning with 9am-5pm CST for real-time collaboration, stand-ups, and meetings. (For example, if you’re in a different time zone, you might shift your schedule accordingly.)
- Education: A Bachelor’s degree in Computer Science, Data Science, Engineering, or a related field is a plus, but not strictly required if you have equivalent practical experience. Certifications or courses in data engineering or BI (e.g., the Google Cloud Professional Data Engineer certification) are nice to have. What we care about most is your ability to do the job as demonstrated by your work experience and skills.
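For the data modeling bullet above, here is a minimal sketch of a Type 2 slowly changing dimension update, which preserves history by expiring old rows rather than overwriting them. It assumes a hypothetical dim_customer table with effective_from/effective_to/is_current columns, tracks changes to a single "plan" attribute, and ignores brand-new customers for brevity:

```python
# Sketch of a Type 2 SCD update in pandas. Table and column names are
# hypothetical; only changes to an existing customer's "plan" are handled.
import pandas as pd


def scd2_apply(dim: pd.DataFrame, incoming: pd.DataFrame,
               today: pd.Timestamp) -> pd.DataFrame:
    """Expire changed rows and append new versions; unchanged rows pass through."""
    dim = dim.copy()
    current = dim[dim["is_current"]]

    # Compare the current dimension rows against the incoming snapshot.
    merged = current.merge(incoming, on="customer_id", suffixes=("_old", "_new"))
    changed = merged.loc[merged["plan_old"] != merged["plan_new"], "customer_id"]

    # Close out the old version of every changed customer.
    expire = dim["customer_id"].isin(changed) & dim["is_current"]
    dim.loc[expire, "effective_to"] = today
    dim.loc[expire, "is_current"] = False

    # Append the new version as the current row.
    new_rows = incoming[incoming["customer_id"].isin(changed)].assign(
        effective_from=today, effective_to=pd.NaT, is_current=True
    )
    return pd.concat([dim, new_rows], ignore_index=True)
```

The expire-and-append pattern is what lets analysts ask "what plan was this customer on last quarter?" rather than only seeing the latest state.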
Perks/benefits: Career development, flex hours, health care, startup environment