ATTENTION: Recruiter - please talk to me before posting job -- thanks!
Client: Cerebri AI
Position: Database Engineer
Type: Full Time
Work Address: 25 Adelaide Street East, Toronto
Salary: $110K - $120K (if you need to go higher, let me know)
Stock Options: 25% vests after the 1st year, then monthly over the next 36 months
Vacation: 2 weeks to start, but putting an unlimited vacation policy in place
Benefits: Have sent email for high level overview (drugs/dental)
Why hiring: they did some restructuring, and this is a new role
# on team: 24
Start date: early January or asap
When did it open: November 12th
How many interviews: less than 6
Where did the candidates come up short: It is very important for this person to be able to wear multiple hats; the candidates interviewed so far have been more architecture-focused and lacked the required hands-on engineering and DBA experience
Reports to: VP Software
1st interview: an HR screen, 45-60 minutes, via Zoom
Important that candidates have done their research on Cerebri AI, found out who they are, and gotten to know them
2nd interview: an in-person technical test and interview with the VP Software
Any remote work: would consider some remote work for a superstar not in the GTA
Core work hours: 9am to 5pm
This is not a 9-5 job though – they need to roll up their sleeves to complete deliverables, and there are times when there could be lots of overtime to meet deadlines
Travel: There will be about 25% travel to client sites in Canada and the US, and also to other Cerebri AI offices in Austin and Washington.
What is the office environment like: will get information around this; booking a time the week of Jan 7th to visit the office.
Why would someone want to leave a company that they are currently with to join your team at Cerebri AI in Toronto?
1. AI startup – an opportunity to see what an end-to-end machine learning application looks like; this is new AI software that has not been done before
2. Their lead investor is M12, the venture fund of Microsoft, one of the largest AI companies in the world
3. Working with cutting-edge tools and technology that will add value to your resume
4. From a deployment perspective, you will be working with senior people in organizations – an opportunity to get customer/client relationship experience on your resume.
MUST HAVE SKILLS:
- Postgres experience is a must-have; MySQL experience would be a nice asset.
- SQL query tuning and optimization.
- Physical and logical database modelling – Data Warehousing/ BI design and development. Backend architecture experience.
- Some DBA experience – database management and maintenance.
- Documentation and communication skills.
- Background check and cross border visa (travel to USA).
- Enterprise-level experience – must be able to wear multiple hats, with hands-on engineering and DBA experience, not architecture experience alone.
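For candidates wondering what the Postgres query tuning and optimization requirement looks like in practice, a minimal sketch (the table, column, and index names here are hypothetical, not from Cerebri's schema):

```sql
-- Hypothetical slow lookup: check the plan first.
EXPLAIN ANALYZE
SELECT id, email, last_seen
FROM customers
WHERE email = 'person@example.com';

-- A sequential scan on a large table here usually points to a
-- missing index; adding one lets the planner do an index scan.
CREATE INDEX idx_customers_email ON customers (email);
```

The day-to-day work is this loop: read the plan, find the expensive node, and fix it with an index, a rewrite, or a schema change.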
NICE TO HAVE SKILLS:
- Big data ecosystem – Hadoop, Scala, Spark, Kafka.
- Understanding of machine learning – supporting analytical teams.
About Cerebri AI
Cerebri AI, a venture-backed pioneer in artificial intelligence and machine learning, is the creator of Cerebri Values™, the industry’s first universal measure of customer success. Cerebri Values quantifies each customer’s commitment to a brand or product and dynamically predicts “Next Best Actions” at scale, which enables large companies to focus on accelerating profitable growth. Deployed as a SaaS application running on Microsoft Azure, Cerebri Values operates behind the corporate firewall, ensuring the highest level of security and safeguarding personal information. Headquartered in Austin with offices in Toronto and Washington, DC, the company has 50 employees who have been awarded over 130 patents to date. To learn more, visit cerebri.com.
Our client currently seeks a Database Engineer to join their growing team in downtown Toronto. In this role you will build, optimize, and maintain conceptual and logical database models, data stores, and supporting procedures. If you have enterprise experience along with hands-on engineering and DBA experience, we want to hear from you!
- Define, drive, manage (and implement part of) the enterprise data strategy and roadmap.
- Create architecture diagrams and a technology navigation map.
- Recommend and design system architectures to align with strategic business objectives.
- Lead data governance, review all data and database schema changes, and implement as needed.
- Work with MDM / Metadata repositories and Metadata lifecycle management.
- Map various Enterprise Metadata models to Cerebri internal models.
- Work with data scientists and analysts to put machine learning models into production (this is roll-up-the-sleeves work).
- Build tools, frameworks, and dashboards to support running experiments and analyses.
- Write code to develop new software products and/or features; manage individual project priorities, deadlines, and deliverables.
Requirements:
- 7+ years’ experience developing large-scale, highly available distributed systems.
- 7+ years programming in SQL and Python.
- Experience setting up and managing multi-TB Data warehouses, Distributed Relational / NoSQL / Graph / Hive Data Clusters.
- Background building data pipelines.
- Understanding of security technologies: PKI, Crypto, identity management.
- Design of data access APIs (see note on GraphQL).
- Design of analytics capture and presentation.
- Optimization of database design and SQL/NoSQL data-stores for latency, performance.
- Experience with data curation (ETL, CRUD, audit) processes to transform data from a variety of data sources to normalized forms.
- Understanding of microservices architecture and Docker infrastructure.
- Experience building data pipelines leveraging open source technologies like Kafka, Hadoop, Hive, and Spark.
- Experience working with RDBMS databases (e.g. PostgreSQL or SQL Server), managing connection pools, performance tuning, and optimization.
- BS / MS / PhD in Computer Science or a related engineering field.
- Strong sense of ownership, passion to build quality products for massive scale in a collaborative, agile environment, and excitement to learn.
Nice to haves...
- Experience with data governance and managing large data schemas.
- Experience with international standards for security and privacy (HIPAA, Privacy Shield, GDPR, PCI).
- Demonstrated hands-on experience with blockchain technology, ideally with Hyperledger Fabric or Ethereum Enterprise implementation.
- Background building data pipelines leveraging Azure/Microsoft technologies e.g. Azure Data Lake, HDInsights.
- GraphQL.