Teiid Platform Sizing Guidelines and Limitations

Users and customers always ask us about sizing their data virtualization infrastructure based on Teiid or the JDV product from Red Hat. Typically this is a very involved question and not an easy one to answer in plain terms, because it involves taking into consideration questions like:
  • What kind of sources is the user working with? Relational, file, CRM, NoSQL, etc.
  • How many sources are they trying to integrate? 10, 20, 100?
  • What volumes of data are they working with? 10K, 100K, 1M+ rows?
  • What are the query latency times from the sources?
  • How is Teiid being used to implement the data integration/virtualization solution, and what kind of queries is the user executing? Even small federated results may take a lot of server-side processing, especially if the plan needs tweaking.
  • Is materialization being used?
  • Are the queries written in an optimal way?
  • and so on...
Each and every one of these questions affects performance profoundly, and if you have a mixture of them, it becomes that much harder to give a specific configuration.

Before you start thinking about beefing up your DV infrastructure, the first things you want to check are:
  • Is your current infrastructure serving your current needs and future expectations?
  • What kind of changes are you expecting?
  • Is there a change in the type of sources coming, like using Hadoop or cloud-based solutions?
We need to build the DV infrastructure based on the available resources combined with the mandated requirements for your use case. Since Teiid is a real-time data virtualization engine, it depends heavily upon the underlying sources for data retrieval (there are caching strategies to minimize this). If Teiid is working with slow data sources, then no matter how much hardware you throw at it, you are still going to get a slower response. Where more memory and faster hardware can help DV is when the Teiid engine is doing lots of aggregation, filtering, grouping, and sorting over large sets of result rows as part of a user query. That means all of the questions raised above may directly impact each individual query in terms of CPU and memory.
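As a concrete example of one such caching strategy, a view can be internally materialized so that Teiid serves it from its own buffer manager instead of hitting a slow source on every query. Below is a minimal sketch in Teiid DDL; the ords.orders source table and the one-hour ttl are hypothetical, so adapt them to your own VDB:

  -- Internal materialization: Teiid loads and maintains the view
  -- contents itself; the cache hint's ttl (in milliseconds) controls
  -- how often the materialized contents are refreshed.
  CREATE VIEW customer_order_counts (
      customer_id integer,
      order_count integer
  ) OPTIONS (MATERIALIZED 'TRUE')
  AS
      /*+ cache(ttl:3600000) */
      SELECT customer_id, COUNT(*) FROM ords.orders GROUP BY customer_id;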

There are some limitations in the Teiid engine itself:

1. Hard limits, which break down along several lines in terms of the number of storage objects tracked, disk storage, streaming data size/row limits, etc.:
  • Internal tables and result sets are limited to 2^31 rows.
  • The buffer manager has a max addressable space of 16 terabytes, but due to fragmentation you'd expect the max usable to be less (this is relatively easy to scale up with a larger block size when we need to). This is the maximum amount of storage available to Teiid for all temporary LOBs, internal tables, intermediate results, etc.
  • The max size of an object (batch or table page) that can be serialized by the buffer manager is 32 GB, but no one should ever get near that (the default limit is 8 MB). A batch is a set of rows flowing through the Teiid engine.
Handling a source that has tera/petabytes of data doesn't by itself impact Teiid in any way. What matters is the processing operations being performed and/or how much of that data needs to be stored in Teiid on a temporary basis. With a simple forward-only query, as long as the result row count is less than 2^31, Teiid will be perfectly happy to return a petabyte of data.

2. Soft limits, based upon Teiid's configuration, that could impact sizing:

Each batch/table page requires an in-memory cache entry of approximately 128 bytes; thus the total number of batches tracked is limited by the heap, which is also why we recommend increasing the processing batch size on larger-memory machines or in scenarios making use of large internal materializations. The actual batch/table contents are managed by the buffer manager, which has a layered memory buffer structure with a spill-over facility to disk.
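As a back-of-the-envelope sketch of that tracking overhead (assuming the ~128 bytes per entry figure above and a nominal batch size of 256 rows; both numbers vary by version and row width):

  1,000,000,000 materialized rows / 256 rows per batch ≈ 3.9M batches
  3.9M batches x 128 bytes per cache entry ≈ 500 MB of heap just for tracking

  with the processing batch size raised to 2048 rows:
  1,000,000,000 rows / 2048 ≈ 488K batches x 128 bytes ≈ 62 MB of heap

This is why a larger processing batch size pays off for large internal materializations, at the cost of coarser-grained memory management.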

3. There are open file handle and other resource considerations (such as buffers being allocated by drivers) that are somewhat indirect from Teiid, depending upon the particulars of the data source configurations, and these can have an impact as well.


4. Internal materialization is built on the buffer manager, so it is directly dependent upon it and subject to the limits above.

5. When using XA, source access is serialized; otherwise, source access happens in parallel. This can be controlled using the number of source threads per user query.
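On the JBoss EAP/WildFly-based runtime, that per-query source concurrency is a Teiid subsystem setting. The jboss-cli sketch below is an assumption from memory (the attribute name thread-count-for-source-concurrency may differ by version, so verify it against your standalone-teiid.xml):

  # Limit each user query to at most 4 concurrent source queries
  # (attribute name assumed; check your Teiid version's subsystem schema).
  /subsystem=teiid:write-attribute(name=thread-count-for-source-concurrency, value=4)
  # Reload the server so the change takes effect.
  :reload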

Some scenarios may not be appropriate for Teiid. Something contrived, such as a 1M x 1M row cross join, may not be a good fit for the virtualization layer. But is that a real use case where you are going to cursor over a trillion rows to find what you are looking for? Is there a better targeted query? These are the kinds of questions you need to be asking yourself when designing a data virtualization layer.

Take a look at the query plan and the command log, record the source latencies for a given query, and see if your Teiid instance is performing optimally for your use case. Is it CPU bound or IO bound (large source results and long source wait times)? See if your submitted queries have been waiting in a queue (you can check the queue depth). Wherever you see the fallout is where you may need additional resources.
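You can capture the plan and execution details from any Teiid JDBC session with the SHOWPLAN statements; a minimal sketch (the query itself is just a placeholder):

  SET SHOWPLAN DEBUG;   -- collect the plan along with detailed execution information
  SELECT customer_id, COUNT(*) FROM ords.orders GROUP BY customer_id;
  SHOW PLAN;            -- retrieve the plan for the last executed statement

Comparing the node-level statistics in the plan output against the source latencies in the command log usually makes it clear whether the time is going to Teiid's own processing or to waiting on the sources.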

Our basic hardware recommendation for a smaller departmental use case is (double it if you need HA or disaster recovery):
  • 16-core processor
  • Minimum of 32 GB RAM
  • 100+ GB of buffer manager temp disk (an SSD-based device may give better results when there are lots of cache misses or swapping of results)
  • Red Hat Enterprise Linux 6+
  • Gigabit Ethernet
Then do a simple pilot with your own use case(s), your own data, and your own infrastructure under the anticipated load. If you find that the DV server is totally CPU bound and queries are being delayed because of it, then you can consider adding cores to the server or additional nodes in a cluster. Note again: make sure your source infrastructure is built to handle the load that DV executes against it.

It would be really great if you shared the hardware profiles you selected for your Teiid environments, and the techniques you used to reach that decision.

Thank you.

Ramesh & Steve.
