Connectors
Ceridian: Change Log & Notes
Dayforce: Primary Key Tables and Raw Data
Dayforce Non-Primary Key Tables

The Ceridian Dayforce (docid 3f3b3 toiuutq366unalv) ("Dayforce") integration from dlh.io is the best data synchronization solution available on the market today for Dayforce customers. Customers often ask about a common misconception: that the dynamic integration, which synchronizes data from Dayforce to any downstream application, is a mirror of the Dayforce application itself. One difference is the use of unique records and primary keys, which signify the uniqueness of a record in a dataset. This is particularly interesting because most of the Dayforce data that can be synchronized is provided as transactional data, meaning the data feeds for those entities/tables have no unique identifiers that allow direct reference to previous records; some transformation or comparison process is required to align the data so it can be updated with any changes (or with references to records deleted in the Dayforce application itself). As a result, records that are changed in the Dayforce application appear in the Dayforce feed as point-in-time modifications. In data architecture terms, this flow of information is considered transactional: any new, updated, or deleted information is provided as a unique row, as if the data were being pulled or synchronized for the first time.

Why is this important?

It matters because it affects how downstream reporting is assessed. Knowing that data for many tables/entities is transactional means that any attempt to manipulate a row of data in transit may cause data integrity issues. The best approach is to land the data transactionally as it synchronizes from Dayforce to your target destination, such as Snowflake, Azure Synapse, Databricks, etc., and then perform transformation or cleansing on that raw ("bronze") data layer. This post-processing transformation is important because it allows each organization's unique business requirements to be applied to the raw data itself, rather than to some other process' interpretation of the raw data. It also matters because some organizations need to track the history of updates for analytics, historical trend analysis, data science, data warehousing, and similar purposes. If the raw data were not landed transactionally in the target tables when Dayforce expresses it as such, customers who want this change or trend analysis would be unable to achieve their reporting goals.

Transactional Raw Data

dlh now ensures the integrity of transactional data landing in customers' target systems without any cleansing or updates. This structural integrity is based on analysis of Dayforce data feeds and feedback from customers using the Ceridian Dayforce (docid 3f3b3 toiuutq366unalv) source connector. All customer dlh accounts can now create one or more pre- or post-SQL transformations (see SQL Transformations, docid 5zjgrvbhtywqw8 olioh0) to adjust any incoming or existing data based on their business needs. This includes marking metadata columns such as dlh_is_active and dlh_is_deleted as true or false, depending on the business logic requirements. You can find examples of these transformations on the main Ceridian Dayforce (docid 3f3b3 toiuutq366unalv) page. The dlh.io support team can assist with this exercise for your implementation, or it can be accomplished with mid-level SQL skill by following the general advice in the docs.
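As a hedged illustration of such a post-SQL transformation, the sketch below marks every superseded row in a transactional raw table as inactive, so that only the most recent record per business key remains flagged as active. The schema and table name (RAW_DAYFORCE.EXAMPLE_NON_PK_TABLE) and the key/timestamp columns (XREF_CODE, SYNC_TS) are illustrative placeholders; only the dlh_is_active / dlh_is_deleted metadata columns come from the description above, and your actual column names and business rules may differ.

```sql
-- Minimal sketch of a post-SQL transformation on the raw ("bronze") layer.
-- Assumption: XREF_CODE identifies the business key and SYNC_TS records
-- when dlh landed the row; adjust both to match your actual schema.
UPDATE RAW_DAYFORCE.EXAMPLE_NON_PK_TABLE t
SET DLH_IS_ACTIVE = FALSE
WHERE EXISTS (
    -- A newer transactional row exists for the same business key,
    -- so this older row no longer represents the current state.
    SELECT 1
    FROM RAW_DAYFORCE.EXAMPLE_NON_PK_TABLE newer
    WHERE newer.XREF_CODE = t.XREF_CODE
      AND newer.SYNC_TS > t.SYNC_TS
);
```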
Other recommendations include using dlh, dbt or dbt Cloud, or Coalesce for more robust transformation and cleansing requirements, as most data warehouse or lakehouse architectures should.

Dayforce PK Tables

Below are the dlh data synchronization tables for Dayforce that have primary key concepts as of an April 2025 update (and, after that timeframe, for any new Dayforce connectors). Any tables other than those listed below will not have a structural or constraint-based primary key.

Table / Entity Name | Has PK (Y/N)
Certification | Y
Contact Information Type | Y
Course | Y
Deduction Definitions | Y
Department | Y
Document | Y
Earning Definitions | Y
Employee | Y
Employee AUS Superannuation | Y
Employee AUS Superannuation Rule | Y
Employee Punch | Y
Employee Punch Labor Metrics | Y
Employee Punch Transfers | Y
Employee Raw Punch | Y
Employment Status | Y
Employment Status Group | Y
Employment Status Reason | Y
Job | Y
Labor Metric Code | Y
Labor Metric Type | Y
Location Address | Y
Org Unit | Y
Org Unit Child | Y
Org Unit Legal Entities | Y
Org Unit Parents | Y
Pay Class | Y
Pay Group | Y
Pay Type | Y
Payroll Election | Y
Policy Association | Y
Position | Y
Project | Y
Training Program | Y

For clarity, when tables other than those listed are synchronized and they contain pen-duplicate records (rows with similar, and perhaps associated, data where one or more columns differ), no merging of the newly synchronized record will occur. An example is the Employee Work Assignment table: it does not (or no longer) have a primary key structure or constraint, and it is therefore transactional. If on day 1 a sync bridge pulls data resulting in the following:

XRef | Employee | Status | Effective End | Sync TS
123 | XYZ | Active | NULL | 2025-03-01

and then on day 2 that same employee changes the previous work assignment and the sync bridge retrieves this record:

XRef | Employee | Status | Effective End | Sync TS
123 | XYZ | Term | 2025-03-02 | 2025-03-02

then the main target table in your destination schema will have two records. They will not be merged, because the business rules for each customer cannot be determined from the Dayforce feed, no PK exists, and this is therefore a transactional table. The result would look as follows:

XRef | Employee | Status | Effective End | Sync TS
123 | XYZ | Term | 2025-03-02 | 2025-03-02
123 | XYZ | Active | NULL | 2025-03-01

In this case you can read the use case documentation for the Employee Work Assignment (docid p3j8wlwiyw2vmsnjtp9u3) solution, which discusses how to use a SQL transformation (SQL Transformations, docid 5zjgrvbhtywqw8 olioh0) to update the previous NULL effective end date and/or dlh_is_active fields with the latest effective end date and false values, respectively; a sketch of that pattern appears at the end of this section.

It is important to note that dlh customers should not manually update any of the target destination tables or schemas managed by dlh. The database schemas created by dlh for each source connection should only be updated by dlh processes, unless otherwise recommended by dlh. Otherwise, synchronization issues and delays may occur, requiring additional manual integrity efforts to resolve.

What to do if you find an issue?

If you encounter any problems with the integrity of the synchronized transactional raw data, such as absolute duplicates (rows where every column's data matches exactly, as opposed to pen duplicates, where rows have similar data but one or more columns differ), or data not loading or synchronizing when compared to the source application, please contact dlh Customer Support (docid pbtuxndqrdogoroejbgsv).
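As referenced in the Employee Work Assignment discussion above, here is a minimal sketch (assuming a Snowflake-style target) of a SQL transformation that closes out the earlier open-ended record once a newer transactional row arrives: it copies the latest effective end date onto the prior row with a NULL effective end and flags that row as no longer active. The schema/table name and the XREF_CODE, EFFECTIVE_END, and SYNC_TS column names are illustrative assumptions based on the example rows above; confirm them against your actual destination schema (or with dlh Customer Support) before applying anything like this.

```sql
-- Hypothetical sketch for the Employee Work Assignment use case described above.
-- For each business key (XREF_CODE), find the most recent sync and its
-- effective end date, then close out any earlier row that is still open-ended.
UPDATE RAW_DAYFORCE.EMPLOYEE_WORK_ASSIGNMENT prev
SET EFFECTIVE_END = latest.LATEST_EFFECTIVE_END,
    DLH_IS_ACTIVE = FALSE
FROM (
    SELECT XREF_CODE,
           MAX(SYNC_TS)       AS LATEST_SYNC_TS,
           MAX(EFFECTIVE_END) AS LATEST_EFFECTIVE_END  -- MAX ignores NULLs
    FROM RAW_DAYFORCE.EMPLOYEE_WORK_ASSIGNMENT
    GROUP BY XREF_CODE
) latest
WHERE prev.XREF_CODE = latest.XREF_CODE
  AND prev.SYNC_TS < latest.LATEST_SYNC_TS   -- only earlier rows
  AND prev.EFFECTIVE_END IS NULL;            -- only rows still open-ended
```

Applied to the two-row example above, this would set the day-1 row's Effective End to 2025-03-02 and its dlh_is_active flag to false, while leaving the day-2 row untouched.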