Cloudflare R2
Cloudflare R2 (R2) is a service offered by Cloudflare, a well-known web infrastructure company that has extended its offerings with object storage (https://en.wikipedia.org/wiki/Object_storage) through a web service (https://en.wikipedia.org/wiki/Web_service) interface. Cloudflare R2 runs on the same scalable storage infrastructure that Cloudflare already provides to tens of thousands of customers. R2 can store any type of object, which enables use cases such as storage for internet applications, backups, disaster recovery, data archives, data lakes (https://en.wikipedia.org/wiki/Data_lake) for analytics, and hybrid cloud storage (https://en.wikipedia.org/wiki/Cloud_computing#Hybrid_cloud).

Our Cloudflare R2 Storage DataLakehouse.io integration replicates files stored in your R2 buckets to your cloud data warehouse target, synchronizing to your target destination at a scheduled frequency. It allows you to replicate/synchronize your data, including capturing snapshots of data at any point in time, and keep it up to date with little to no configuration effort. You don't even need to prepare the target schema: DataLakehouse.io will automatically handle all the heavy lifting for you. All you need to do is specify the connection to your R2 bucket and point to your target system, or use a DataLakehouse.io managed data warehouse, and DataLakehouse.io does the rest. Our support team can even help you set it up during a short technical onboarding session.

Setup Instructions

DataLakehouse.io securely connects to your Cloudflare R2 storage bucket using the form in the DataLakehouse.io portal. Please complete the following basic steps:

1. Enter a name or alias for this connection in the 'Name/Alias' field that is unique from other connectors.
2. Enter a 'Target Schema Prefix', which will be the prefix for the schema at the target into which your data files will be synced.
3. Enter a 'Bucket' name, where your files are stored. The bucket URL typically starts with https://, so enter just the name without the prefix.
4. Select your 'Region' as the default (Global/Auto).
5. Enter your 'Access Key', the credentials used to access the bucket.
6. Enter your 'Secret Key', the credentials used to access the bucket.
7. Enter any other optional details in the available fields (see the setup video if you need help, or contact support):
   - Folder Path: a path under the root of the bucket from which the desired files will be retrieved.
   - File Pattern: a regular expression (regex) used to isolate only certain files to be retrieved.
   - File Type: restricts retrieval to a predetermined file extension.
8. Click the Save & Test button. Once your credentials are accepted, you should see a successful connection.

How-To Instructions

Control Each Column Data Type

SQL Transformations allow logic to be executed against a target connection based on a scheduled frequency or a triggered event of new data on tables updated via DataLakehouse.io (DLH.io). This especially helps when you want to control the data types set in your target connection, since all columns are set as VARCHAR(16777216).

Issue Handling

If any issues occur with the authorization, simply return to the Sources page in DataLakehouse.io, edit the source details, and click the 'Save & Test' button to confirm connectivity. If any issues persist, please contact our support team via the DataLakehouse support portal (https://datalakehouse.zendesk.com).
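To make the optional Folder Path, File Pattern, and File Type fields concrete, here is a minimal sketch of how such filters could combine to select files from a bucket listing. The function name, the combination logic, and the sample keys are illustrative assumptions, not the actual DataLakehouse.io implementation.

```python
import re

def select_files(keys, folder_path="", file_pattern=None, file_type=None):
    """Return the object keys that pass all three optional filters.

    folder_path  - only keys under this path on the root bucket
    file_pattern - regex used to isolate only certain files
    file_type    - predetermined file extension to retrieve
    """
    selected = []
    for key in keys:
        if folder_path and not key.startswith(folder_path.rstrip("/") + "/"):
            continue  # outside the requested folder path
        if file_pattern and not re.search(file_pattern, key):
            continue  # does not match the file pattern regex
        if file_type and not key.endswith("." + file_type.lstrip(".")):
            continue  # wrong file extension
        selected.append(key)
    return selected

keys = [
    "exports/2024/orders.csv",
    "exports/2024/orders.json",
    "logs/app.log",
]
print(select_files(keys, folder_path="exports", file_pattern=r"orders", file_type="csv"))
# → ['exports/2024/orders.csv']
```

With no filters set, every file in the bucket listing is retrieved; each field you fill in narrows the selection further.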
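Because all synced columns land as VARCHAR(16777216), a common use of a SQL Transformation is to cast them into properly typed columns. Below is a minimal sketch, under the assumption that a transformation can create a typed view over the landed table; the view, table, and column names are hypothetical.

```python
def build_cast_view(view_name, table_name, column_types):
    """Build a CREATE OR REPLACE VIEW statement that casts each
    all-VARCHAR landed column to the desired target type."""
    select_list = ",\n  ".join(
        f"CAST({col} AS {sql_type}) AS {col}"
        for col, sql_type in column_types.items()
    )
    return (
        f"CREATE OR REPLACE VIEW {view_name} AS\n"
        f"SELECT\n  {select_list}\nFROM {table_name};"
    )

# Hypothetical landed table and desired types
sql = build_cast_view(
    "analytics.orders_typed",
    "raw.r2_orders",
    {"order_id": "NUMBER", "order_date": "DATE", "amount": "NUMBER(12,2)"},
)
print(sql)
```

The generated statement can then be scheduled as the SQL Transformation, so downstream queries read typed columns instead of raw VARCHAR values.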