Snowflake

Customers on an Enterprise or Growth plan can access Data Pipeline as an add-on package. See our pricing page for more details.

This guide describes how Mixpanel data is exported into a Snowflake dataset. Create a pipeline to export your Mixpanel data into Snowflake; once the export job is scheduled, Mixpanel exports data to Snowflake on a recurring basis.

Design

Mixpanel exports data to your database. We first load the data into a single-column raw (VARIANT type) data table. Then, we create a view to expose all properties as columns.
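As a sketch of the resulting layout (using the table and view names that appear in the query examples below, with your own database and schema substituted in), the same property can be read from either the raw table or the view:

-- Raw table: each event is a single VARIANT value in the DATA column,
-- so properties are extracted with the : path syntax and a cast.
SELECT DATA:event_name::STRING, DATA:distinct_id::STRING
FROM <DB_NAME>.<SCHEMA_NAME>.MP_MASTER_EVENT_RAW
LIMIT 10;

-- View: the same properties are exposed as ordinary columns.
SELECT event_name, distinct_id
FROM <DB_NAME>.<SCHEMA_NAME>.MP_MASTER_EVENT
LIMIT 10;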

IP Restrictions

Mixpanel Data Pipelines supports static IP addresses for Snowflake connections when IP restrictions are configured on your Snowflake instance. If you are using a Snowflake network policy to restrict access to your instance, you might need to add the following IP addresses to the allowlist:

US

34.31.112.201
35.184.21.33
35.225.176.74

EU

34.147.68.192
35.204.164.122
35.204.177.251

IN

34.47.224.29
34.93.42.83
35.244.19.238
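
If you already manage access with a Snowflake network policy, one way to allow these addresses is to add them to the policy's allowed list. A hedged sketch, assuming a hypothetical policy named EXPORT_POLICY and a US-region project; note that SET ALLOWED_IP_LIST replaces the entire list, so include your existing entries as well:

-- <existing entries> stands in for the IPs already on your policy;
-- SET ALLOWED_IP_LIST overwrites the list rather than appending to it.
ALTER NETWORK POLICY EXPORT_POLICY SET ALLOWED_IP_LIST = (
  '<existing entries>',
  '34.31.112.201',
  '35.184.21.33',
  '35.225.176.74'
);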

Set Export Permissions

Step 1: Create a Role and Grant Permissions

Create a role (for example, MIXPANEL_EXPORT_ROLE) and grant it access to your database, schema, and warehouse. Replace <database name>, <schema name>, and <warehouse name> with the actual names.

CREATE ROLE MIXPANEL_EXPORT_ROLE;
GRANT ALL ON DATABASE <database name> TO ROLE MIXPANEL_EXPORT_ROLE;
GRANT ALL ON SCHEMA <database name>.<schema name> TO ROLE MIXPANEL_EXPORT_ROLE;
GRANT USAGE ON WAREHOUSE <warehouse name> TO ROLE MIXPANEL_EXPORT_ROLE;
GRANT OPERATE ON WAREHOUSE <warehouse name> TO ROLE MIXPANEL_EXPORT_ROLE;
GRANT MONITOR ON WAREHOUSE <warehouse name> TO ROLE MIXPANEL_EXPORT_ROLE;

Step 2: Create a Storage Integration

Mixpanel stages exported data in a GCS bucket (gcs://mixpanel-export-pipelines-<project-id>) before loading it into your warehouse. This bucket is created and managed by Mixpanel — you do not need to create it yourself. To allow Mixpanel to load data from this bucket into your Snowflake warehouse, create a GCS storage integration and grant it to the role. Replace <project-id> with your Mixpanel project ID.

CREATE STORAGE INTEGRATION MIXPANEL_EXPORT_STORAGE_INTEGRATION
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'GCS'
  ENABLED = TRUE
  STORAGE_ALLOWED_LOCATIONS = ("gcs://mixpanel-export-pipelines-<project-id>");
GRANT USAGE ON INTEGRATION MIXPANEL_EXPORT_STORAGE_INTEGRATION TO ROLE MIXPANEL_EXPORT_ROLE;

Step 3: Authenticate the User

Refer to Step 2: Creating the Pipeline to create a data pipeline via the UI.

We provide two authentication methods: password and key-pair. In the examples below, create a user with either a password or a public key, then grant the role to that user. You can find the public key in the Snowflake pipeline creation UI. If you already have a user, update the relevant fields and grant the role.

Password authentication

CREATE USER MIXPANEL_EXPORT_USER PASSWORD='<password you provided>' DEFAULT_ROLE=MIXPANEL_EXPORT_ROLE;
 
ALTER USER MIXPANEL_EXPORT_USER SET PASSWORD='<password you provided>';
GRANT ROLE MIXPANEL_EXPORT_ROLE TO USER MIXPANEL_EXPORT_USER;

Key-pair based authentication

CREATE USER MIXPANEL_EXPORT_USER RSA_PUBLIC_KEY='<mixpanel generated key>' DEFAULT_ROLE=MIXPANEL_EXPORT_ROLE;
 
ALTER USER MIXPANEL_EXPORT_USER SET RSA_PUBLIC_KEY='<mixpanel generated key>';
GRANT ROLE MIXPANEL_EXPORT_ROLE TO USER MIXPANEL_EXPORT_USER;

Partitioning

The data in the raw tables is clustered based on the time column in the project’s timezone. To be exact, we use CLUSTER BY (TO_DATE(CONVERT_TIMEZONE('UTC','<TIMEZONE>', TO_TIMESTAMP(DATA:time::NUMBER)))) where TIMEZONE is the Mixpanel project’s timezone.
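
Because the raw tables are clustered on the event date in the project timezone, filtering on that same expression lets Snowflake prune micro-partitions instead of scanning the whole table. A sketch, using the table name and placeholders from the query examples below:

-- Filtering on the clustering expression enables partition pruning.
SELECT count(*)
FROM <DB_NAME>.<SCHEMA_NAME>.MP_MASTER_EVENT_RAW
WHERE TO_DATE(CONVERT_TIMEZONE('UTC', '<PROJECT_TIMEZONE>',
        TO_TIMESTAMP(DATA:time::NUMBER))) = TO_DATE('2024-01-15');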

Queries

A query is a request for data results. You can combine data from different tables; add, change, or delete table data; and perform calculations.

Snowflake supports an OBJECT type that can store JSON objects and arrays. Mixpanel exposes top-level array and object properties as OBJECT columns in the view.
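
For instance, assuming a hypothetical purchase event with a nested object property cart and an array property items (these names are illustrative, not part of the Mixpanel schema), nested fields can be read with the path syntax and arrays expanded with LATERAL FLATTEN:

-- cart and items are hypothetical properties used for illustration.
SELECT
  properties:"cart":"total"::NUMBER AS cart_total,  -- nested object field
  item.value::STRING AS item_name                   -- one row per array element
FROM <DB_NAME>.<SCHEMA_NAME>.MP_MASTER_EVENT,
  LATERAL FLATTEN(input => properties:"items") item
WHERE event_name = 'purchase';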

Here is an example of how you can query the raw table when using one table for all the events. DB_NAME and SCHEMA_NAME should be replaced by your Snowflake database and schema name.

SELECT count(*)
FROM <DB_NAME>.<SCHEMA_NAME>.MP_MASTER_EVENT_RAW
WHERE DATA:event_name::string = 'sign up';

Here is an example of how you can query the view when using one table for all the events:

SELECT count(*)
FROM <DB_NAME>.<SCHEMA_NAME>.MP_MASTER_EVENT
WHERE event_name = 'sign up';

Getting the number of events in each day

You will need this if you suspect the export process is not exporting all the events you expect. Because the time column in the tables is in UTC, you first need to convert it to your Mixpanel project timezone, and then count the events for each day. The following query does that for you.

SELECT
  TO_DATE(CONVERT_TIMEZONE('UTC','<PROJECT_TIMEZONE>', time)) as ttime,
  count(*)
FROM <DB_NAME>.<SCHEMA_NAME>.MP_MASTER_EVENT
WHERE ttime>=TO_DATE('2021-12-03') AND ttime<=TO_DATE('2024-09-01')
GROUP BY ttime
ORDER BY ttime;

This example returns the number of events on each day in your project timezone. PROJECT_TIMEZONE, DB_NAME, and SCHEMA_NAME should be replaced with your Mixpanel project timezone and your Snowflake database and schema names.

Querying the identity mapping table

When using the ID mappings table, use the resolved distinct_id in place of the non-resolved distinct_id whenever it is present. If there is no resolved distinct_id, fall back to the distinct_id from the existing people or events table.

Below is an example SQL query that references the ID mapping table to count the number of events in a specific date range for each unique user in San Francisco.

SELECT
  COALESCE(mappings.resolved_distinct_id, events.distinct_id) AS resolved_distinct_id,
  COUNT(*) AS count
FROM
  <DB_NAME>.<SCHEMA_NAME>.MP_MASTER_EVENT events
FULL OUTER JOIN
  <DB_NAME>.<SCHEMA_NAME>.MP_IDENTITY_MAPPINGS_DATA mappings
ON
  events.distinct_id = mappings.distinct_id
  AND events.properties:"$city"::STRING = 'San Francisco'
  AND TO_DATE(CONVERT_TIMEZONE('UTC','<PROJECT_TIMEZONE>', events.time)) >= TO_DATE('2020-04-01')
  AND TO_DATE(CONVERT_TIMEZONE('UTC','<PROJECT_TIMEZONE>', events.time)) <= TO_DATE('2024-09-01')
GROUP BY
  COALESCE(mappings.resolved_distinct_id, events.distinct_id)
LIMIT
  100;
