LDWH 2.0.53 released: Snowflake and Parquet connectors
Version 2.0.53 includes two new connectors: Snowflake and Parquet.
Snowflake
The cloud-based Snowflake data warehouse had been part of our development branch for some time and has now made it into the stable release. You can connect Snowflake as a data source or use it as your analytical storage. If you are looking for highly scalable SaaS storage for your Logical Data Warehouse, let us know and we can connect you with Snowflake.
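As a rough sketch, a Snowflake data source can be set up via the SYSADMIN procedures used for other JDBC sources. All names and connection properties below are placeholders, and the exact template, translator, and property names may differ; please consult the connector documentation:

CALL SYSADMIN.createConnection(
    name => 'snowflake_dwh',  -- placeholder data source name
    jbossCLITemplateName => 'snowflake',
    connectionOrResourceAdapterProperties => 'host=<account>.snowflakecomputing.com,db=<database>,user-name=<user>,password=<password>'
);
CALL SYSADMIN.createDatasource(
    name => 'snowflake_dwh',
    translator => 'snowflake',
    modelProperties => '',  -- importer settings, if any, go here
    translatorProperties => ''
);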
Parquet
Parquet is a columnar file format that is particularly popular in the Hadoop world. You can now create a Parquet data source that will allow you to write to a Parquet file from Data Virtuality via regular SQL. A statement like "SELECT * INTO parquet.destination_file_name FROM views.source" will dump the content of the view to the specified Parquet file.
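For example, assuming a Parquet data source named "parquet" and a source view "views.customers" (both names are illustrative), the following statement writes the view's contents to a Parquet file named after the target table:

SELECT * INTO parquet.customer_export FROM views.customers;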
Google Analytics
A small convenience improvement was made to the Google Analytics connector. Previously, the connector always returned string values, so numeric values had to be cast manually. Now the connector has correct data types and directly returns e.g. integer or double for the corresponding columns, so manual casts are no longer necessary.
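A before/after sketch; the schema and column names are made up for illustration:

-- before 2.0.53: numeric metrics arrived as strings and needed a manual cast
SELECT CAST(sessions AS integer) AS sessions FROM google_analytics.report;

-- from 2.0.53 on: the connector returns the proper type directly
SELECT sessions FROM google_analytics.report;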
Quick access to audit log
Data Virtuality has an audit log that tracks changes to the logical data model. The feature existed before, but was only accessible by manually querying the SYSLOG schema. Now you can right-click a schema, view, or procedure and select "Show history" to see the changes that have been applied to that object. The retention of log entries is controlled via the "Clean old histories task" system job.
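If you prefer SQL over the context menu, the log can still be queried directly. The table and column names below are purely illustrative; check the SYSLOG schema of your installation for the actual ones:

-- illustrative query only: actual SYSLOG table and column names may differ
SELECT * FROM SYSLOG.view_history ORDER BY creation_date DESC;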
Full Changelog
Studio
- DVCORE-4441 (New Feature): add support for Snowflake as Data Source and Analytical Storage
- DVCORE-5836 (Improvement): make CSV the default value for "File Format" in the CSV query builder
- DVCORE-4797 (Improvement): add a "Show history" context menu entry to view an object's change history
- DVCORE-4717 (Improvement): extend Redshift and JDBC/Snowflake wizards to support s3load and bucket credentials
- DVCORE-5835: materialization tooltip is incorrect
- DVCORE-5880: remove the reference information for HP from HP Vertica
Connectors
- DVCORE-5891 (New Feature): Add support for writing data into Parquet files
- DVCORE-4279 (New Feature): Snowflake: add support for Snowflake as an Analytical Storage
- DVCORE-3306 (Improvement): Map Google Analytics data types to corresponding Data Virtuality data types
- DVCORE-5300 (Improvement): Implement push down for TIMESTAMPCREATE to PostgreSQL
- DVCORE-5854: BINARY keyword added to the WHERE clause during query rewriting slows down MySQL and MemSQL queries
- DVCORE-5521: Redshift: materialization gets stuck in state RUNNING
- DVCORE-5816: Provide exceptions for Google Ads report fields that do not always return double
- DVCORE-5733: Multiple concurrent requests per connection to MySQL/Netezza data source cause errors
Backend
- DVCORE-4803 (Improvement): Add parentId in all the history entities to map them to their corresponding actual entities
- DVCORE-5826: Make table names in UTILS.formatTableName and UTILS.getTableIntersection case-insensitive
- DVCORE-5601: COUNT(*) used on an EXCEPT/INTERSECT of two sub-queries based on different data sources fails with ASSERTION FAILED error
- DVCORE-5599: Error on complex query with grouping expression that uses alias columns with the same names
- DVCORE-5568: LEFT JOIN with GROUP BY fails if ON clause contains a comparison to constant and tables are from different sources
- DVCORE-5229: Cycle detection when a view and a procedure share the same name and one references the other
- DVCORE-5138: REPEAT function truncates strings to 4000 characters (see the example below)
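To illustrate the DVCORE-5138 fix, REPEAT no longer truncates its result at 4000 characters; the lengths here are just an example:

SELECT LENGTH(REPEAT('abcde', 1000)) AS len;
-- previously capped at 4000 characters; now returns the full length of 5000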