With the release of i-refactory 3.0.0 we've made a variety of stability, performance, functional and usability improvements.
Feature | Summary |
---|---|
Physical deletes | On a logical model you now have the option to either logically or physically delete "removed" records. On a generic data access interface you have the option to "allow" for physical deletes. |
Cleanup violated constraint log | You can now delete the log of violated records and violated constraints for a delivery. |
Trusted/untrusted setting | A logical validation interface or entity can be marked as trusted or untrusted which will enforce the isComplete settings of a delivery. |
Task runs | We've introduced a new concept in our metadata model: TaskRun. |
When creating a delivery on a logical validation model we now support the option to treat "removed records" as either logical deletes or physical deletes. For deliveries on a generic data access model we now support the option to allow for physical deletes.
{warning} Physical deletes are executed in a single transaction and may have an impact on system resources.
With the physical delete option enabled we physically remove data from the fact store, in contrast to the default logical delete behaviour where data is never physically removed. We do, however, only delete the contextual data. Business keys registered in the anchors of the fact store will not be physically deleted but marked as such. This feature gives you the ability to delete sensitive data when required to do so. A full audit trail is still kept, which gives you the ability to prove that a physical delete actually took place at a given point in time, for a given business key, for a specific delivery of a third party.
{info} A delivery with physical delete enabled must be completed and persisted in the fact store before you can create a new delivery on a logical data model. This is in contrast to a logical delete scenario, where a new delivery can be created as soon as the data in the logical validation layer is accepted.
For a completed delivery you can now request to physically delete all registered logging in violatedRecord and violatedConstraint. The log of violations counts per constraint will not be removed.
You need to have the "DataManager" role to be able to clean the log. The property delivery.logCleanupInfo registers the status of the cleanup process.
In a logical validation model we store the difference between subsequent deliveries. Our logical validation layer acts as an intermediate change log buffer, allowing for parallel loading of deliveries. This buffer also enables us to optimize the data load to the central fact store: even though entities are delivered as a complete set, we can process the delivered records as a delta load to the central fact store, significantly improving load performance to the central facts layer.
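Conceptually, promoting a full delivery to a delta load amounts to diffing the delivered set against the state retained in the change log buffer. A toy sketch of that idea (the row shape, function name and diff rules are illustrative, not the actual i-refactory implementation):

```typescript
interface Row {
  key: string;   // business key
  value: string; // contextual data, simplified to a single column
}

type Change = { type: "insert" | "update" | "delete"; row: Row };

// Derive a delta from a full delivery by comparing it with the previously
// buffered state, keyed by business key.
function diffFullDelivery(previous: Row[], delivered: Row[]): Change[] {
  const prev = new Map(previous.map((r) => [r.key, r]));
  const changes: Change[] = [];
  for (const row of delivered) {
    const old = prev.get(row.key);
    if (!old) {
      changes.push({ type: "insert", row });
    } else if (old.value !== row.value) {
      changes.push({ type: "update", row });
    }
    prev.delete(row.key); // whatever remains afterwards was removed
  }
  for (const row of prev.values()) {
    changes.push({ type: "delete", row });
  }
  return changes;
}
```

Only the resulting changes then need to be applied to the central facts layer, instead of reprocessing the full set.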
We can also reason about whether the data stored in a logical validation entity can still be trusted. If we no longer trust the data, we simply apply the isComplete setting of the delivered entity when loading to the central fact store.
However, in some situations, and more specifically in case of deliveries with context based enddating, we are no longer able to correctly determine the trustworthiness of the registered data of a logical entity in the logical validation model. This is only the case if someone deleted or modified records.
So if someone (a DBA, for example) removes data from a logical validation entity, and it is known that due to this action we can no longer trust promoting a full delivery on that entity to a delta delivery when loading to the central facts layer, we've introduced the capability to mark an interface or entity in a logical validation model as untrusted. If the computed conclusion at entity level is false, we will never promote a full load to a delta load but will honour the full load setting, at the cost of decreased performance.
We've introduced a new concept in our metadata model: TaskRun. In our previous release we could not manage state other than an update of a target entity. With our new approach we can now execute many kinds of tasks and keep track of their state, composition and dependencies. An example of a task not related to a target entity that we needed to execute in our previous release is the cleanup of intermediate storage used during the processing of a delivery. If this cleanup failed for some reason, we had to stop the i-refactory server because we couldn't register the failure state of these tasks.
We have changed how we determine whether a record for a bitemporal entity is a removed record or a changed record. In previous releases a bitemporal record in the logical validation model was considered removed if you no longer delivered a record on the valid time start (or when specifically delivered in a delta delivery). From now on we will mark records as appended when a delivered timeline overlaps with an existing timeline. This behaviour was already implemented in the central facts layer. Because we now support physical deletes, in which we need to recursively delete overlapping timelines, we needed to align the handling of bitemporal context in the logical validation model with that in the central facts layer.
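The overlap test driving this classification can be sketched as follows (an illustrative predicate with hypothetical field names, not the engine's actual code):

```typescript
// A half-open validity interval [validFrom, validTo).
interface Timeline {
  validFrom: number; // e.g. days since epoch; the real engine works with dates
  validTo: number;
}

// Two half-open intervals overlap when each starts before the other ends.
function overlaps(a: Timeline, b: Timeline): boolean {
  return a.validFrom < b.validTo && b.validFrom < a.validTo;
}

// A delivered record counts as appended (rather than removed) when its
// timeline overlaps any existing timeline for the same business key.
function isAppended(delivered: Timeline, existing: Timeline[]): boolean {
  return existing.some((e) => overlaps(delivered, e));
}
```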
If a delivery is created where context based enddating applies we will filter rows delivered in a child entity which do not match with the parent context entity. For example: Given a datamodel with entities PART, SUPPLIER and PART_SUPPLIER where PART_SUPPLIER is a dependent entity on PART and SUPPLIER. And given a delivery with a delta load on PART and a full load on PART_SUPPLIER (partial delivery). And given the fact that the delta set for PART only contains a single record where PART.NBR === 1. And given the fact that the full set on PART_SUPPLIER contains a single record where PART.NBR === 2 && SUPPLIER.NBR === 1. We will filter the row in PART_SUPPLIER because it doesn't match with the delivered set on PART (no match between PART_SUPPLIER.NBR and PART.NBR).
This filter is required because unmatched rows would, or could, result in loading errors on the fact store. And if a delivered record cannot be stored in the fact store, we shouldn't store it in the logical validation model either.
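The filter described above boils down to dropping child rows whose reference key has no match in the delivered parent set. A minimal sketch using the PART/PART_SUPPLIER example (the function and row shape are hypothetical):

```typescript
interface PartSupplierRow {
  partNbr: number;     // reference to PART.NBR
  supplierNbr: number; // reference to SUPPLIER.NBR
}

// Keep only PART_SUPPLIER rows whose PART.NBR occurs in the delivered PART set.
function filterUnmatchedChildRows(
  childRows: PartSupplierRow[],
  deliveredPartNbrs: Set<number>
): PartSupplierRow[] {
  return childRows.filter((row) => deliveredPartNbrs.has(row.partNbr));
}
```

With a delta set containing only PART.NBR 1, the row with partNbr 2 is filtered out, mirroring the example above.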
Regardless of whether a reference column in a central facts entity was mapped to from a logical validation model, we previously always executed a join lookup for each reference column.
From this release we will only generate a lookup to a parent entity if a reference column is actually mapped to.
If a default value expression is specified on an attribute of a central facts datamodel, and no default value expression is specified on the attribute in the logical validation model that is the source of an attribute mapping, we from now on do not apply the default value expression of the central facts attribute but simply use the value as specified in the logical validation model.
If a constraint violation is detected on a record in a bitemporal table and this row should be skipped, we will skip all other delivered bitemporal records belonging to the same business key.
The reason for doing this is that we can no longer trust the consistency of the delivered timelines if one of them fails.
The implicitly skipped rows are not registered as constraint violations: not in the constraint violation count, not as a violatedRecord and not as a violatedConstraint.
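The skip rule can be illustrated as follows (row shape and function name are hypothetical, for illustration only):

```typescript
interface BitemporalRow {
  businessKey: string;
  violated: boolean; // true when a constraint violation was detected on this row
}

// If any delivered row for a business key violates a constraint, skip every
// delivered row for that key: the timelines can no longer be trusted as a set.
function rowsToSkip(rows: BitemporalRow[]): BitemporalRow[] {
  const violatedKeys = new Set(
    rows.filter((r) => r.violated).map((r) => r.businessKey)
  );
  return rows.filter((r) => violatedKeys.has(r.businessKey));
}
```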
For set based constraint validation we currently append the delivered set of rows with the existing actual rows already stored in the logical validation layer. We append rows if:
In our previous release, for bitemporal context, we simply added rows with an equality lookup on the primary key.
From this release on we will append rows for bitemporal data with an overlaps query.
For dependent entities, prior to this release, it was mandatory to always register the non-reference primary key columns after the reference related primary key columns. From now on you can register these primary key columns at the desired position in the anchor. The code generator will resolve the key lookup properly.
Testing our software on SQL Server 2019 revealed deadlock issues during CRUD transactions. These were caused by SQL Server merge joins, which lock more rows than strictly necessary, in a different order than another concurrent request.
We've added two hints to these queries: inner loop join and force order. This fixed the deadlock issue.
From this release on we will check for inconsistencies between several application dependencies.
Depending on the roles of the authenticated and authorised user, menu items and actions will be enabled or disabled accordingly.
The look and feel of reference lookups is improved.
/acmDatadef/interface
and /acmDatadef/deliverableInterface
Removed properties:
New properties:
/acmDatadel/delivery
and /acmDatadel/activeDelivery
New properties:
/acmDatadef/baseEntity
New properties:
/acmDatalog/violatedRecord
You need to have the DataManager role to execute this request.
/acmDatadef/interface
You can set the property logicalValidationInterfaceTrusted on a logical validation model.
/acmDatacon/database
Ability to change the value of description.
/acmDatadel/delivery
New properties:
/acmDatalog/taskRun
Get data for a taskRun. A taskRun is the execution of a task: a load to a technical staging table, a set of constraints validations, a load to a logical validation table, dropping temporary tables, ...
/acmDatalog/taskRunDependency
Get data for a taskRunDependency. A taskRunDependency registers a taskRun's predecessors and successors and as such represents a dependency graph.
/acmDatadel/logicalStagingDelivery(...)/cleanConstraintViolations
Deletes all logging in violatedRecords and violatedConstraints for the given delivery.
/acmDatadef/baseEntity
Ability to change the value of logicalValidationEntityTrusted.
/acmDatalog/taskRun
Set properties of a taskRun: the statusCode, rowAfter, messageNbr, messageText.
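A PATCH request body for this endpoint could then look like this (property names taken from the list above; the values are purely illustrative):

```json
{
  "statusCode": "Failed",
  "rowAfter": 0,
  "messageNbr": 50000,
  "messageText": "Dropping temporary tables failed."
}
```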
The following Rest API calls are deprecated. We strongly encourage you to use the new endpoints instead.
Operation | Deprecated endpoint |
---|---|
PATCH | Deprecated: /acmDatalog/entityUpdate/ Instead use: /acmDatalog/taskRun |
GET | Deprecated: /acmDatalog/activeLogicalStagingDeliveryEntityUpdateFromExternalSource Instead use: /acmDatalog/activeLogicalStagingDeliveryTaskRunExecutedByUser |
The i-refactory runtime engine runs on NodeJS LTS versions 10 and 12.
driverConnectionProperties configuration
As a result of upgrading to the latest version of the driver responsible for connecting to the SQL Server database, a small change in the configuration is required with regard to the connection properties.
The configuration regarding driverConnectionProperties:
{
"driverConnectionProperties": {
"server": "",
"userName": "",
"password": ""
}
}
Should be changed to:
{
"driverConnectionProperties": {
"server": "",
"authentication" : {
"userName": "",
"password": ""
}
}
}
httpRestApi configuration
Our Rest API requires a valid OAuth2 token in each request. For security reasons this token should be signed. Our Rest API server needs to know the location of the public key and the signature algorithm used to sign the token. You can use your own OAuth2 compliant server to grant access tokens (for example Windows Active Directory). To reflect these changes we slightly changed the configuration of our Rest API.
The configuration regarding httpRestApi:
{
"httpRestApi": {
"enabled": true,
"openIdPublicKey": "crypto/openId.pem",
"https": {
"port": 3000,
"host": "localhost",
"key": "crypto/ssl.key",
"cert": "crypto/ssl.crt"
}
}
}
Should be changed to:
{
"httpRestApi": {
"enabled": true,
"https": {
"port": 3000,
"host": "localhost",
"key": "crypto/ssl.key",
"cert": "crypto/ssl.crt"
},
"accessToken": {
"publicKey": "crypto/key_public.pem",
"signatureAlgorithm": "RS256"
}
}
}
{info} If you choose to use the i-refactory OAuth2 server, the signatureAlgorithm should be set to RS256. Our authorization server always uses the RS256 algorithm to sign tokens.
uiServer configuration
To eliminate confusion about which kind of authorization server is used for our Web Application, we've decided to rename the property openId to authorizationServer. We've also removed the openIdResourceUri property (a reference to the URI for requesting an openId token) because we do not need an openId token, only a valid OAuth2 token.
The configuration regarding uiServer:
{
"uiServer": {
"enabled": true,
"https": {
"port": 3002,
"host": "localhost",
"key": "crypto/ssl.key",
"cert": "crypto/ssl.crt"
},
"apiUrl": "https://localhost:3000",
"openId": {
"clientId": "i-refactory-ui",
"authorizationEndPointUri": "https://localhost:3003/authorize",
"tokenEndPointUri": "https://localhost:3003/token",
"openIdResourceUri": "https://localhost:3003/openid"
}
}
}
Should be changed to:
{
"uiServer": {
"enabled": true,
"https": {
"port": 3002,
"host": "localhost",
"key": "crypto/ssl.key",
"cert": "crypto/ssl.crt"
},
"apiUrl": "https://localhost:3000",
"authorizationServer": {
"clientId": "i-refactory-ui",
"authorizationEndPointUri": "https://localhost:3003/authorize",
"tokenEndPointUri": "https://localhost:3003/token"
}
}
}
openIdServer configuration
To eliminate confusion about the intent of the i-refactory authorization server, we've decided to rename the property openIdServer to authorizationServer.
The properties privateKey and publicKey no longer have a default value and should be specified explicitly. This prevents security breaches where, for example, in releases prior to 3.0.0 we created a private/public key pair and you accidentally used this key pair without even knowing.
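One way to create such a key pair explicitly is with Node's built-in crypto module (a sketch; the modulus length and file names are merely examples):

```typescript
import { generateKeyPairSync } from "crypto";

// Generate an RSA key pair suitable for RS256 token signing.
const { publicKey, privateKey } = generateKeyPairSync("rsa", {
  modulusLength: 2048,
  publicKeyEncoding: { type: "spki", format: "pem" },
  privateKeyEncoding: { type: "pkcs8", format: "pem" },
});

// Save these to e.g. crypto/key_public.pem and crypto/key_private.pem and
// reference them from the privateKey/publicKey configuration properties.
```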
The last change is related to registering client applications. A system client should be able to request an accessToken without an explicit login process, by providing its username/password directly. In OAuth2 this grant type is called client_credentials. For system clients you should therefore add the client_credentials value to the grants array.
The configuration regarding openIdServer:
{
"openIdServer":
{
"enabled": true,
"https":
{
"port": 3003,
"host": "localhost",
"key": "crypto/ssl.key",
"cert": "crypto/ssl.crt"
},
"clients": [
{
"clientId": "i-refactory-ui",
"clientSecret": null,
"redirectUri": "https://localhost:3002",
"grants": [ "authorization_code", "refresh_token" ]
},
{
"clientId": "exampleNonWebClient",
"clientSecret": "12345",
"grants": [ "password", "refresh_token" ]
}
],
"users": [
{
"id": "info@i-refact.com",
"username": "administrator",
"password": "abcd123",
"email": "info@i-refact.com",
"roles": [ "DataViewer", "SystemManager", "DataManager", "DataOperator", "Developer" ]
},
{
"id": "exampleUser@i-refact.com",
"username": "exampleUser",
"password": "12345",
"email": "exampleUser@i-refact.com",
"roles": [ "DataViewer", "SystemManager", "DataManager", "DataOperator", "Developer" ]
}
]
}
}
Should be changed to:
{
"authorizationServer": {
"enabled": true,
"https": {
"host": "localhost",
"port": 3003,
"key": "crypto/ssl.key",
"cert": "crypto/ssl.crt"
},
"privateKey": "crypto/key_private.pem",
"publicKey": "crypto/key_public.pem",
"clients": [
{
"clientId": "i-refactory-ui",
"clientSecret": null,
"redirectUri": "https://localhost:3002",
"grants": [
"authorization_code",
"refresh_token"
]
},
{
"clientId": "exampleNonWebClient",
"clientSecret": "12345",
"grants": [
"password",
"client_credentials",
"refresh_token"
],
"roles": [
"DataViewer",
"SystemManager",
"DataManager",
"DataOperator",
"Developer"
]
}
],
"users": [
{
"id": "info@i-refact.com",
"username": "administrator",
"password": "abcd123",
"email": "info@i-refact.com",
"roles": [
"DataViewer",
"SystemManager",
"DataManager",
"DataOperator",
"Developer"
]
},
{
"id": "exampleUser@i-refact.com",
"username": "exampleUser",
"password": "12345",
"email": "exampleUser@i-refact.com",
"roles": [
"DataViewer",
"SystemManager",
"DataManager",
"DataOperator",
"Developer"
]
}
]
}
}
Issue | Summary |
---|---|
[IREFACTORY-1584] | Cancelling a delivery while another one was active on the same logical validation model resulted in a server crash. |
[IREFACTORY-1552] | Deleted records are not removed from the cache in the NodeJS server. |
[IREFACTORY-1479] | Too many rows are added to the table for set based constraint validation from the logical validation model in case context based end dating should be applied. |
[IREFACTORY-1022] | Batch edit mode in the web UI should be disabled for a created delivery (you cannot edit an existing delivery). |
[IREFACTORY-1427] | Tedious request is not always released. |
[IREFACTORY-1509] | Filtering on threshold value in constraint settings in the web app doesn't work. |
[IREFACTORY-1533] | Loading delivery statistics is sometimes very slow. |
[IREFACTORY-1540] | Error when updating database description in web app. |
[IREFACTORY-1705] | The minimum number of connections to create in the connection pool was not honoured correctly. |
[IREFACTORY-1732] | Scheduler runs into already scheduled error in complex SIS datamodel. |
[IREFACTORY-1645] | Row count of delivered records in the delivery statistics not always correct. |
[IREFACTORY-1644] | EntityId instead of EntityUpdateId was shown in monitor page of the web app. |
[IREFACTORY-1748] | Creating a new "database" record in the web app results in error. |
219 | Bi-temporal entity: the views for a bi-temporal entity could not be generated when the names of the attributes for the validity time line were not the same in the central facts layer and in the generic data access layer. |