Vero Designer 2.0 Released!


We are beyond excited for this release. It truly rounds out our platform and better complements your visualization tools like Tableau. To say this is a huge improvement would be an understatement. From faster data blending to direct high-speed data uploads to Amazon Redshift, almost every component saw an upgrade, and several key features were added. I want to thank our customers for working closely with us, providing clear use cases, access to their source data, and amazing feedback. Here is a summary of some of the new capabilities of our platform:

Amazon Redshift data uploader

You can now wrangle and upload CSV and Excel files directly to Amazon Redshift from your desktop. In our tests, we uploaded a 1 million row CSV file end-to-end in under 2 minutes. If configured with AWS S3 credentials, Vero will automatically use the COPY command for faster data loads into Redshift.
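To make the COPY-based path concrete, here is a minimal sketch of what such a bulk load looks like. This is not Vero's internal code; the table, bucket, key, and IAM role names are all hypothetical, and a real load would first stage the file on S3 and then run the statement against the cluster.

```python
# Sketch of a COPY-based Redshift load: after a CSV is staged on S3,
# a single COPY statement bulk-loads it. COPY reads the file in parallel
# across the cluster's nodes, which is why it beats row-by-row INSERTs.
# All names below are hypothetical examples.

def build_copy_statement(table: str, bucket: str, key: str, iam_role: str) -> str:
    """Return a Redshift COPY command for a headered CSV staged on S3."""
    return (
        f"COPY {table} "
        f"FROM 's3://{bucket}/{key}' "
        f"IAM_ROLE '{iam_role}' "
        "FORMAT AS CSV IGNOREHEADER 1;"
    )

sql = build_copy_statement(
    table="analytics.uploads",
    bucket="my-staging-bucket",
    key="exports/sales.csv",
    iam_role="arn:aws:iam::123456789012:role/RedshiftCopyRole",
)
```

Executing this statement from any SQL client connected to the cluster performs the parallel load.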

Use cases

  • Enable analysts to maintain custom data in RedShift
  • Reduce the workload of ETL teams
  • Speed up Tableau visualizations by offloading custom file blending to the more powerful Redshift database for big data




Export data to SQL Tables

Now you can export data as materialized tables into any of the many data sources we support (e.g. Oracle, Postgres, MySQL, Redshift, Teradata, MS SQL Server). Vero handles the SQL generation, type resolution, and data movement. Data preparation on SQL data sources just got a whole lot easier. Visualize faster!
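As a rough illustration of what type resolution involves when exporting to different databases, the sketch below maps inferred column types to each dialect's DDL types. The mappings are simplified assumptions for illustration, not Vero's actual resolution rules.

```python
# Illustrative type resolution: the same logical column needs a different
# SQL type depending on the target database. These mappings are simplified
# assumptions, not Vero's actual rules.

SQL_TYPES = {
    "postgres": {int: "BIGINT", float: "DOUBLE PRECISION", str: "TEXT"},
    "oracle":   {int: "NUMBER(19)", float: "BINARY_DOUBLE", str: "VARCHAR2(4000)"},
}

def create_table_sql(dialect: str, table: str, columns: dict) -> str:
    """Generate a CREATE TABLE statement for the target dialect."""
    mapping = SQL_TYPES[dialect]
    cols = ", ".join(f"{name} {mapping[py_type]}" for name, py_type in columns.items())
    return f"CREATE TABLE {table} ({cols});"

ddl = create_table_sql("postgres", "sales_blended", {"region": str, "revenue": float})
# ddl == "CREATE TABLE sales_blended (region TEXT, revenue DOUBLE PRECISION);"
```

Generating the DDL per dialect is what lets one blended result land as a native table on any supported database.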

Use cases

  • Create blended physical reporting tables for your visualizations. For big data, this is faster than live querying across data sources

  • Complex calculations, like Level of Detail (LOD) measures in Tableau, can benefit significantly from being computed separately and stored in dedicated tables

  • Speed up your analytics and remove analytical load from your operational systems. Use Vero to move reporting data to a separate, dedicated database like Redshift

  • Have mission-critical visualizations powered by big data? Cubes and data extracts provided by visualization tools often don't scale well. Instead, move the data to a dedicated, mission-critical visualization database

Improved data pipelining

SQL can quickly become complex and unwieldy as your analytics grow. Eventually you end up with thousands of lines of SQL, stored procedures, and logic scattered across database views and your visualization tool. Knowledge gets lost and maintenance becomes a nightmare. Vero's block-based approach to building reports lets you intuitively lay out your reports by creating a reusable library of expressions and logic blocks. This release builds on a strong foundation to deliver the best data preparation solution for SQL data sources.
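One way to picture the block approach, sketched here with hypothetical block names rather than Vero's real interface: each block is a named, reusable SQL fragment, and a report chains them into a single common-table-expression (WITH) query so each step stays readable and reusable.

```python
# Sketch of block-based SQL composition: each (name, sql) pair is a
# reusable block; a report chains them into one WITH/CTE query, so later
# blocks can reference earlier ones by name. Block names are hypothetical.

def compose_blocks(blocks, final_select):
    """Chain named SQL blocks into a single CTE query."""
    ctes = ",\n".join(f"{name} AS ({sql})" for name, sql in blocks)
    return f"WITH {ctes}\n{final_select}"

query = compose_blocks(
    [
        ("orders_2024", "SELECT * FROM orders WHERE year = 2024"),
        ("by_region", "SELECT region, SUM(amount) AS total "
                      "FROM orders_2024 GROUP BY region"),
    ],
    "SELECT region, total FROM by_region ORDER BY total DESC",
)
```

Because each block is named, the same filter or aggregation step can be reused across many reports instead of being re-pasted into thousand-line queries.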

Use cases

  • Wrangle and import secondary sources for on demand data blending
  • Aggregate and filter data at one level of detail, then prune it down to a finer level with fine-grained control
  • Calculate measures in one step, use the same measure as a grouping in a second step, then aggregate it once again
  • Blend data across several big fact tables within your data warehouse
  • Recategorize dimension values based on measures and dimensions from other fact tables
  • Mix and match data blending and temporary table blocks between reports
  • Learn more about the 5 Styles of Data Blending

100x faster execution engine

We completely ripped out our old execution engine and built a faster, lighter solution from scratch. Our tests show a 100x, and often larger, increase in performance over Vero 1.0. We also beefed up our data blending capabilities to better handle data type translations between relational databases and other sources.
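To illustrate one of those data type translation concerns: when the same logical column arrives with different types from two sources, a blend has to widen it to a type that can hold both. The widening order below is a deliberately simplified assumption for illustration, not Vero's actual resolution logic.

```python
# Sketch of type widening during cross-database blending: pick a common
# type wide enough for both inputs. This ordering is a simplified
# assumption, not Vero's actual rules.

WIDENING_ORDER = ["smallint", "integer", "bigint", "numeric", "double precision", "text"]

def widen(type_a: str, type_b: str) -> str:
    """Return the wider of two column types for the blended output column."""
    return max(type_a, type_b, key=WIDENING_ORDER.index)

blended = widen("integer", "numeric")  # an int column blended with a decimal one
```

A real engine also has to account for precision, scale, and vendor-specific names, which is exactly the bookkeeping the new engine takes off your plate.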