A new property `violatedConstraintMaxSampleSize` is added to `Delivery`. If no value is set when creating a delivery, the property defaults to 1,000. This value limits the number of records registered in `ViolatedRecord` and `ViolatedConstraint`. For each delivery, a count of the number of violations per validated constraint is registered in `DeliveryViolatedConstraintCount`.
The entity `RelationshipConstraint` is removed. A relationship constraint is from now on available in both directions of the relationship (if applicable).
The entity `ParentRelationshipExistsConstraint` registers all relationship constraints from a child entity key to a parent entity key; this is the traditional parent-child relationship.
The entity `ChildRelationshipExistsConstraint` registers all relationship constraints from a parent entity key to a child entity key, where at least one child entity instance should exist for a given parent entity instance. For example: an order should have at least one line item.
`ViolatedRecord` and `ViolatedConstraint`
When creating a `Delivery`, a `maxSampleSize` can be specified (the default value is 1,000). Instead of registering each and every violation in `ViolatedRecord` and `ViolatedConstraint`, the number of records registered can be limited to the value specified in `violatedConstraintMaxSampleSize`. This significantly improves the throughput of the validation process and lowers the storage cost in case of a high volume of constraint violations. The sample size applies per constraint: for each constraint at most the sample size of records will be registered, including the record on which the constraint was violated.
The set of violated records is ordered first, so we always register the records with the most violated constraints.
The validated constraints and the functionality of the validation process are not affected: each and every constraint to be validated is still executed. The total count of violations for each constraint is now registered in `DeliveryViolatedConstraintCount`.
If a full log of each and every record with violations is still required, set a high value for `maxSampleSize` when creating a `Delivery`. Note that this affects storage and throughput, as registering high volumes of `violatedRecords` and `violatedConstraints` can be costly.
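The per-constraint sampling described above can be sketched as follows. This is an illustrative Python sketch under assumed data shapes, not the i-refactory implementation: records are ordered by how many constraints they violate, and at most the sample size of records is registered per constraint while the full counts are always kept.

```python
from collections import defaultdict

def sample_violations(violations, max_sample_size=1000):
    """Limit registered violation records per constraint.

    `violations` is a list of (record_id, constraint_id) pairs (an
    assumed shape). Records with the most violated constraints are
    preferred, so the sample is taken after ordering records by their
    violation count. Returns (registered, total_counts): `registered`
    maps each constraint to at most `max_sample_size` record ids,
    `total_counts` keeps the full violation count per constraint
    (cf. DeliveryViolatedConstraintCount).
    """
    total_counts = defaultdict(int)
    per_record = defaultdict(set)
    for record_id, constraint_id in violations:
        total_counts[constraint_id] += 1
        per_record[record_id].add(constraint_id)

    # Order records by the number of constraints they violate (descending).
    ordered = sorted(per_record, key=lambda r: len(per_record[r]), reverse=True)

    registered = defaultdict(list)
    for record_id in ordered:
        for constraint_id in per_record[record_id]:
            if len(registered[constraint_id]) < max_sample_size:
                registered[constraint_id].append(record_id)
    return registered, dict(total_counts)
```

Note that the totals are unaffected by the sampling: only the set of registered sample records shrinks.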
deliveredEntity
When creating a `Delivery` with a list of `DeliveredEntities`, until now you had to specify a value for the `snapshotDatetime` of each `DeliveredEntity`. In this release, if no value is set for the property `snapshotDatetime`, it defaults to the system datetime.
With this change, issues with differing clock times on different servers can be circumvented by letting the i-refactory server determine the system datetime.
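The defaulting behaviour can be illustrated with a minimal sketch. The payload shape and key name are assumptions for illustration; the actual delivery API may differ.

```python
from datetime import datetime, timezone

def apply_snapshot_default(delivered_entities, now=None):
    """Fill in snapshotDatetime with the server's system datetime when absent.

    `delivered_entities` is a list of dicts (assumed shape). Using the
    server clock means differing clocks on client machines cannot
    produce inconsistent snapshot times.
    """
    now = now or datetime.now(timezone.utc)
    for entity in delivered_entities:
        entity.setdefault("snapshotDatetime", now)
    return delivered_entities
```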
When a relationship in the logical validation model is set to a cardinality of 1..*, at least one child instance should exist for a parent instance (see the image below for how to set this property in PowerDesigner). This mandatory child existence constraint is now automatically checked for each relationship where the cardinality is set to 1..*, which eliminates the need to create a custom entity set constraint for mandatory child existence.
This constraint type is registered in `ChildRelationshipExistsConstraint`. The regular relationship from child to parent is registered in `ParentRelationshipExistsConstraint`.
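A mandatory child existence check of this kind amounts to finding parents without any child. A minimal sketch under assumed data shapes, not the i-refactory implementation:

```python
def violated_parents(parent_keys, child_parent_keys):
    """Return the parent keys that violate a 1..* relationship, i.e.
    parents for which no child instance exists (e.g. an order without
    any line items)."""
    children_of = set(child_parent_keys)
    return [p for p in parent_keys if p not in children_of]
```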
If an entity in a logical validation model has an attribute that plays a role in more than one relationship, where one role is in a dependent relationship and another is in an independent relationship, you previously had to create a computed column. From now on you no longer have to create a computed column in order to properly map this information to a central facts context entity.
Given the example logical validation model, a `Nation` has a dependent reference to `Snapshot Date` and an independent reference to `Region`. The `SNAPSHOT DATE` attribute plays two roles: a reference to `Snapshot Date` and a reference to `Region`.
In the corresponding fact model, `Nation` is dependent on `Snapshot Date`, and the relationship between `Nation` and `Region` is stored in a context entity `Nation Region`.
We can now map the attribute `SNAPSHOT DATE` to both the `ID` column and the `Region ID` column of the context entity `Nation Region`. In prior releases this was not possible.
Attribute value constraints were previously executed only if the attribute contained a value after the attribute datatype constraint had been executed.
We now execute the attribute value constraints if the attribute has a value in the technical staging table, or if the attribute still has a value after the attribute datatype constraint has been executed.
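The new execution condition can be expressed as a simple predicate. This is illustrative only; the column names and null handling are assumptions:

```python
def should_run_value_constraint(technical_value, logical_value):
    """Run attribute value constraints when the technical staging column
    has a value OR the value survived the datatype constraint (i.e. the
    logical staging column still has a value). Previously only the
    latter condition was checked."""
    return technical_value is not None or logical_value is not None
```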
In order to implement this change, we had to create a distinct column name for the technical staging column versus the logical staging column. We have implemented this by prefixing and suffixing each technical staging attribute with a tilde. We assume that no entities exist where this might result in a name conflict.
For example, creating an entity with the following column names is not allowed:
```sql
CREATE TABLE wrong_column_naming
(
  "name" varchar(15)
, "~name~" varchar(10)
)
```
Why? It results in the following intermediate table (generated at runtime) with a conflict on `~name~`:
```sql
CREATE TABLE #intermediate_table
(
  "name" varchar(15)
, "~name~" varchar(10)
, "~name~" varchar(15) -- Conflicting name: already exists.
, "~~name~~" varchar(10)
)
```
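The tilde wrapping and the conflict it can cause can be sketched as follows. The wrapping rule is as described above; the conflict check itself is an illustrative assumption:

```python
def intermediate_columns(columns):
    """Return the column list of the generated intermediate table: the
    original columns plus each name wrapped in tildes for the technical
    staging columns. Raises ValueError on a conflict such as a table
    already containing both 'name' and '~name~'."""
    result = list(columns) + [f"~{c}~" for c in columns]
    seen = set()
    for c in result:
        if c in seen:
            raise ValueError(f"Conflicting column name: {c}")
        seen.add(c)
    return result
```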
Lock timeout errors on RESTful requests should occur less often, and the concurrency level of handling RESTful requests is strongly improved. We have implemented a more fine-grained locking principle when handling requests.
{note} Deadlocks or database contention might, however, still occur.
From this release on, we validate a relationship only if all of the attributes involved in the relationship have a value. This is in contrast to previous releases, where we validated a relationship if any one of the relationship attributes had a value.
The reason for this change is to have a uniform approach to validating relationships with overlapping roles.
As a side effect of this new approach, you should create a record constraint for each independent relationship that is optional and involves more than one non-primary-key attribute. This record constraint should validate that either all columns have a value or no column has a value. If this constraint is violated, you should either skip the row or set the threshold to zero.
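The new validation condition and the recommended all-or-none record constraint can be sketched as follows (illustrative only; the value representation is an assumption):

```python
def relationship_is_validated(values):
    """A relationship is validated only if ALL involved attributes have
    a value (previous releases validated when ANY attribute had one)."""
    return all(v is not None for v in values)

def all_or_none(values):
    """Record constraint for an optional independent relationship over
    multiple non-primary-key columns: either every column has a value
    or none does."""
    present = [v is not None for v in values]
    return all(present) or not any(present)
```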
If GREATEST is chosen as the consistent time for a business rule helper, the calculation of the consistent transaction time will from now on return the highest transaction time value even if one of the input entities has an undefined value for the transaction time.
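In other words, undefined transaction times no longer prevent the calculation: they are skipped when taking the greatest value. A minimal sketch (undefined is modelled as `None`, an assumption):

```python
def greatest_consistent_time(transaction_times):
    """Return the highest transaction time over the input entities,
    ignoring entities whose transaction time is undefined (None).
    Returns None only when no entity has a defined transaction time."""
    defined = [t for t in transaction_times if t is not None]
    return max(defined) if defined else None
```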
A REST GET request will now consistently return all the business key attributes. In previous releases the business key attributes were only returned if the query could be executed on the cache. From now on we return the business key attributes for queries executed on the database as well.
For example:
When a GET request was issued on `Entity` with a join on `Attribute` where `attribute.code = statusCode`, only entities with an attribute `statusCode` were returned. But instead of returning only the attribute `statusCode`, all attributes of the entity were returned. From now on you still get the properly filtered entities, and the attribute filter is now correctly applied as well.
{note} This was a known and open issue and is now fixed.
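The corrected filtering behaviour can be illustrated with a small sketch. The data shapes and key names are hypothetical, not the actual REST implementation:

```python
def get_entities(entities, attribute_code, business_keys=("id",)):
    """Return entities having an attribute with the given code, keeping
    only the business key attributes plus the matching attribute, as the
    fixed behaviour does (previously all attributes were returned)."""
    result = []
    for entity in entities:
        if attribute_code in entity["attributes"]:
            result.append({
                "keys": {k: entity["keys"][k] for k in business_keys},
                "attributes": {attribute_code: entity["attributes"][attribute_code]},
            })
    return result
```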
GET requests are pushed down to the database server if the cost of executing them on the Node server cache exceeds a given threshold. This will not reduce the overall elapsed time of returning the result, but it prevents the Node server from becoming unresponsive, as returning the result from the Node server cache blocks the server from accepting other requests for a while.
From now on, the entity update to a CFPL target entity with a CFPL business rule as the source entity will be executed even if a GDAL CRUD is active on the target entity.
This enables initialising/loading facts to an anchor/context while retaining the ability to update this derived context in the meantime. However, no guarantees are given that conflicts will not arise due to transaction time problems, which might be caused by a CRUD transaction operating on current time while the business rule helper tries to update context at a transaction time earlier than the current time. To circumvent conflicts, the business rule helper should only insert, not update or delete.
A GDAL delivery will be blocked (entirely) if at least one of its entities will be updated by a logical validation model delivery. If, however, a GDAL model only reads from and writes to an entity which is updated by a business rule helper, the GDAL delivery will not be blocked.
Issue | Summary |
---|---|
[IREFACTORY-1289] | An invalid state transition error could sometimes occur. This was caused by removing a cached record while a pending flush transaction was still open. |
[IREFACTORY-1318] | Import of metadata went wrong when entities of a fact model were completely removed and a generic data access interface was imported as well. The error was caused by not properly removing the cached relationship between entities. |
[IREFACTORY-1301] | An arithmetic overflow error was sometimes returned in the web app showing the delivery statistics. This was caused by SQL Server summing before filtering: the summed result did not fit in the integer datatype. Fixed by first casting the sum to a bigint datatype. |
[IREFACTORY-1279] | To be able to properly validate all set constraints on a logical validation model we constructed the set of rows from the rows delivered and the rows already available in the logical validation model from previous deliveries. In case the newly provided set of rows was a delta set the addition of rows from the existing logical validation model was not filtered correctly. |
[IREFACTORY-1276] | During the execution of the SQLServer installation script an error might occur indicating a primary key violation. This is fixed. |
[IREFACTORY-1319] | An invalid query path in a REST GET request resulted in a 500 error instead of a 409 error. |