How to test Encryption Context with ServiceNow Automated Test Framework

To improve upgrade and patch times, ServiceNow offers the Automated Test Framework (ATF) to automate testing of various ServiceNow applications. Out of the box it comes with over 200 automated tests, and you can create as many tests as you want with pre-built test steps.

However, not all test steps are covered by the OOTB implementation, so you are given the option to create your own custom steps.

One common scenario that ATF does not support is encryption context.

Encryption context allows only users assigned to a specific ‘encryption context’ to view confidential data; for other users this data is encrypted at the database level and they have no access to it. This means that granting a role to a user is not enough: they have to manually click the encryption context picker and select the context that allows them to see the sensitive data.

If your test relies on this data, you will not see it and all further steps of your ATF test will fail.

Changing the encryption context can be done programmatically via a REST API call; however, it can only be done client side while the ATF test runner is present on screen.

To achieve this:

1) Let’s create a service portal page named ‘encryption_testing’.

2) Create a widget that will make the REST API call and authenticate the user with the document.cookie global object. You can choose to pass the sys_id of the encryption context server side via $sp.getParameter(‘sys_id’), or hardcode it in the client script. Use this for your widget script:

data.sys_id = 'sys_id_of_encryption_context'; // Server side: get this from the sys_encryption_context table, or read it from the URL with $sp.getParameter('sys_id')

$http({
    method: 'PUT',
    url: '/api/now/ui/concoursepicker/encryption',
    headers: {
        'Content-Type': 'application/json',
        'Cookie' : document.cookie
    },
    data: { id: data.sys_id }
});

3) Create a module called “Encryption ATF Testing” that leads to our portal page (it can be sp or any other portal). If you used a URL parameter to indicate the sys_id of the encryption context, add it to the module URL as well.
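For example, if you pass the context via a URL parameter, the module URL could look like this (the sys_id is a placeholder, and the page id matches the one created in step 1):

```
/sp?id=encryption_testing&sys_id=<sys_id_of_encryption_context>
```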

4) When creating your ATF test, use the OOTB “Open a module” test step and choose the “Encryption ATF Testing” module.

How to increase performance of your Transform Maps – ServiceNow

Transform maps are an easy, low-code tool to move data from external sources into ServiceNow applications. They are on ServiceNow’s best practices list and are always recommended as the #1 solution for your integrations.

The idea is to create a data source, choose a destination table in SN and map fields between them via drag and drop. Sometimes you can add some code if you have a more complicated use case.

However, this is an example where low code fails miserably compared to pro code.

There is a problem: importing data into ServiceNow is fast, but transforming data via transform maps is super slow.

Everyone who has worked with ServiceNow has noticed this already, so why are people still using transform maps?

If you want to put your apps on the ServiceNow Store, you have to become a technology partner by joining their program (TPP). You also have to complete this course, and once you build an application it has to be verified by ServiceNow automated scripts that check for best practices – including whether you are importing data using transform maps.

If you have tried to integrate ServiceNow with other tools, they usually have a connector you can download from the ServiceNow Store; however, these connectors are often very slow or unusable because they follow these ‘best practices’.

Also, transform maps are still suggested by other developers as the #1 choice, as they are really easy to get started with and usually require no code.

Based on how some third-party tool integrations available on the ServiceNow Store are built, I managed to improve data processing by ~610%, or more than 6 times faster (processing a large daily CMDB dataset went from 6 hours to 1 hour), by moving away from transform maps (and store apps, of course).

So how do I ‘transform’ my transform maps?

  1. Create a table that extends the import set row table – this will be a license-free table, where you can store import information for troubleshooting purposes before the data is processed.
  2. Move the data to the new table via data sources: by pushing from external sources via the TABLE/SR API, a scheduled import, or by pulling via an internal scheduled script.
  3. Look for available script includes to support what you are doing – for example, the CMDB Reconciliation and Identification Engine API is one of them – or create one yourself using GlideRecord. This script should hold your ‘transform map’ logic.
  4. Create an on insert or on update (depending on your integration architecture) business rule that calls your custom transformation script include. If your design requires manual approvals or auto-approval rules, only enable the transformation once these rules are met.
  5. After the transformation is done, mark the record as processed in the same business rule.
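As a sketch of step 3, the ‘transform map’ logic can be a plain mapping function inside your script include. The field names here (u_hostname, u_serial, u_ip) are hypothetical examples; in the business rule you would pass in the staging row and write the result to the target table via GlideRecord or the Identification and Reconciliation API.

```javascript
// Hypothetical 'transform map' logic for a custom script include: maps one
// staging row (from a table extending sys_import_set_row) to target field values.
function transformRow(row) {
    return {
        name: String(row.u_hostname || '').toLowerCase(), // coalesce missing values, normalize case
        serial_number: String(row.u_serial || ''),
        ip_address: String(row.u_ip || ''),
        sys_class_name: 'cmdb_ci_server'                  // example target CI class
    };
}

// In the on insert/update business rule (step 4) you would do something like:
//   var values = transformRow(current);
//   // ...insert/update the target record via GlideRecord or the
//   // Identification and Reconciliation API, then:
//   current.u_processed = true;  // step 5: mark the row as processed
//   current.update();
```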

The only downside is that you own this solution now. ServiceNow will not support it, but you will. So make sure your code is well written!

How to download and use all node logs to improve instance health (system logs are not enough)

System logs do not provide all the useful information about your instance. Slow ACLs, slow business rules and other issues may matter for your instance health and performance monitoring, but they are only recorded in the node logs. If you have never browsed node logs, you will be surprised how many issues they can reveal. However, getting to all the node logs is problematic.

If you go to the “Node Log File Download” module and open any log file, you will only see a “Download” button. If you press it, you will get a log file from a single node only – the node you are currently logged onto.

OK, so that’s just one node. Do I just have to log in to each node and download the logs one by one?

Kind of. This was possible prior to the Madrid release, where you were able to switch to another node by modifying certain cookies. As of Madrid, however, it is more complicated than that: you need to decode the IP and port values in a certain way (the BIGipServerpool_${instance.name} cookie value is ${decoded_IP}.${decoded_port}.0000), or use Chrome plugins that might not be allowed in your company.
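That cookie value appears to follow the common F5 BIG-IP persistence-cookie encoding, where the IP and port are stored as little-endian integers. Assuming that encoding, a minimal decoder sketch:

```javascript
// Decode an F5 BIG-IP persistence cookie value such as "1677787402.36895.0000"
// into "ip:port". The first number is the IPv4 address as a little-endian
// 32-bit integer; the second is the port as a byte-swapped 16-bit integer.
function decodeBigIpCookie(value) {
    var parts = value.split('.');
    var ipInt = parseInt(parts[0], 10);
    var portInt = parseInt(parts[1], 10);
    var ip = [
        ipInt & 0xFF,           // lowest byte is the first octet
        (ipInt >>> 8) & 0xFF,
        (ipInt >>> 16) & 0xFF,
        (ipInt >>> 24) & 0xFF
    ].join('.');
    // swap the two bytes of the port
    var port = ((portInt & 0xFF) << 8) | ((portInt >>> 8) & 0xFF);
    return ip + ':' + port;
}
```

For example, `decodeBigIpCookie('1677787402.36895.0000')` yields `10.1.1.100:8080`.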

We tried raising a case with HI support, and we were told: “It is not possible for SNOW customers to switch nodes and download individual node log files. If you need your node logs you will have to raise a request and HI support will do it for you”. So this is not supported by SN.

This is not an ideal solution for us, as dealing with HI support is usually slow and inconvenient.

Downloading node logs is easy for on-premises customers: they can log in to their application servers and download the logs directly. But there is a faster way for them as well.

Luckily, some ServiceNow developer actually built a feature which is kind of hidden – not documented and not widely known – but it solves this specific issue.

  1. Go to “Node Log File Download” module and stay in the list view.
  2. Locate any node log and right-click it.
  3. Select “Download Logs from Near Nodes”.
  4. You are given an option to select multiple nodes and log date range.

Now, if you go to “Node Log Download History” module, all your previously downloaded logs will be saved as attachments for your future reference.

Also note, there are APIs that allow us to create scheduled scripts to scrape the node logs for useful information. This can supplement and automate the parts of instance monitoring that are not covered by the system logs.

Grant non-admin developers limited application and update set access

It used to be problematic to give limited access to developers (or BAs) from other teams. There was no easy way to limit access to specific scripts or specific workflows, because everything used to live in the global scope or under a custom scoped application. Users would get access to everything or nothing, unless you wanted to spend a lot of time building heavy custom ACLs.

ServiceNow solved this issue with Delegated Development. It is now good practice to build everything under scoped applications; for example, new features come out as their own applications (Agent Workspace, GRC, Change Management, spoke applications).

Note: generally, under the new license subscription model, there are no license implications and you can build as many scoped applications as you want.

1) Go to sys_store_app.list where you can see all available applications.

2) Under related links click on “Manage developers”.

3) Choose a user and grant them access only to what they need from this application, by application file type: script, workflow, service portal, update set access, etc.

4) Now the user will have access to the application and update set pickers after they enable them from their developer settings. Also, whenever they open any application file type, a filter showing their allowed applications will be applied automatically. Other files will be hidden by ACL restrictions.

Note: to enable configuration of update set rights, you first have to set the com.snc.dd.manage_update_set_enabled system property to true. Users will then have access to the update set module and will be able to choose update sets from there. To enable update set picker access, set the glide.ui.update_set_picker.role property to the application role name.
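For reference, the two properties described above look like this (the role value is a placeholder for your application role):

```
com.snc.dd.manage_update_set_enabled = true
glide.ui.update_set_picker.role = <application_role_name>
```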

This is not limited to development activities – it could also be used to let users modify their own catalog items without having access to the full catalog.

Also, there is a simple way to change the scope programmatically:

$http({
    method: 'PUT',
    url: '/api/now/ui/concoursepicker/application',
    headers: {
        'Content-Type': 'application/json',
        'Cookie' : document.cookie
    },
    data: { app_id: 'sys_id_of_application' }
});

Call UI actions with REST using ServiceNow OOTB way – UPDATED

Note: this out-of-the-box method is only available from the Madrid version and up; if you are using a prior version, click here.

Ever since Agent Workspace was introduced, work has been underway to remove dependencies on Jelly and Java pages, which led to the introduction of GraphQL and other OOTB REST functionality.

One of them, previously only possible through somewhat complicated scripting, is calling UI actions.

The new way is simple and straightforward.

There is a single endpoint that takes a POST call for UI actions.

POST @ /api/now/ui/ui_action/${sys_id_of_UI_action}

Additionally, there are 3 mandatory body parameters:

  1. sysparm_table – table name of UI action.
  2. sysparm_sys_id – sys_id of the record you want to run UI action against.
  3. api=api (this has to be static).

If your UI action is designed to take additional parameters, you can pass them as well.
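Putting it together, a raw request might look like this (the sys_ids are placeholders):

```
POST /api/now/ui/ui_action/${sys_id_of_UI_action}
Content-Type: application/json

{
    "sysparm_table": "incident",
    "sysparm_sys_id": "${sys_id_of_record}",
    "api": "api"
}
```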

Note: if this is not working, you might need to set the Content-Type: application/json request header and pass the body parameters as follows:

{
    "fields": [  { "name": ""  } ]
} 

Also, if your UI action contains client-side-only code, like g_form, it has to be converted to server side for this to work, because without a user session present, g_form and other client-side APIs will be null.

How ServiceNow is updating its Tech Stack and is using GraphQL

One of the biggest complaints ServiceNow receives is that it runs on old technology. It is true that most of ServiceNow runs on Java, Jelly and AngularJS, which come with poor performance, but ServiceNow is working to change that.

New ServiceNow features introduced in Madrid and enhanced in New York come with a more modern technology stack.

  • The core component of newly designed pages is no longer a Jelly UI page (it still is for Service Portal – $sp.do); instead, pages are built with modern JS frameworks like React or the ServiceNow JavaScript UI Framework. That means no Java and no Jelly.
  • To access other specific components, there are at least 50 new OOTB undocumented REST APIs that perform ServiceNow functionality which was previously only possible through old-school HTTP requests (history, favorites, breadcrumbs, filters, date, impersonation, UI actions, etc.). This means almost all the required functionality is accessible from the front end.
  • However, not everything is upgraded to the new stack yet; there are still some iFrames that call Jelly pages, and some REST APIs are missing. Those too can be called using REST APIs, but it is complicated.
  • More APIs and more components are coming to ServiceNow. A lot of performance optimization happens when you move away from Java and Jelly.

But a natural problem arises: if we have so many different REST APIs to call to fill our page with everything we need, it becomes too complicated to continue developing and growing. We need some way to perform batch REST API operations.

Here is where GraphQL comes to ServiceNow.

As REST replaced SOAP, some believe GraphQL will replace REST due to its handy benefits:

  1. You only call one endpoint to get all the data instead of calling many – you do not need to call 20 different REST APIs to get all the information onto one page.
  2. You only get the data you ask for, not whatever the REST API developer wanted you to receive – so you can keep the transaction size as small as possible, hence the performance increase.

It is also possible to use GraphQL for your ServiceNow integrations and ditch REST APIs.

  1. You only need to call one endpoint with POST for everything you need.

/api/now/graphql?api=api

  2. All the parameters are in the request body.

How do I call ServiceNow using GraphQL?

For example, if you wanted to get the encodedRecord, sysId and record values for a specific incident:
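The schema is undocumented, so the following is only a hypothetical sketch of what such a query body (POSTed to /api/now/graphql?api=api) could look like, using the field names mentioned in this article:

```
query {
    incident(sysId: "<sys_id_of_incident>") {
        encodedRecord
        sysId
        canReadRecord
    }
}
```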

We can also retrieve any other elements from the page (related lists, UI actions, formatters, etc.) or perform GlideRecord-equivalent operations, like isValidRecord, lastErrorMessage, canReadRecord.

Getting all incidents looks more complicated, but it is still far simpler than the old way. Remember, this is all done from the client side and you have access to anything you ask for.

Also, as this is still fully undocumented, I have not spent much time finding the ideal queries. This could probably be optimized a lot.

What are other tools in ServiceNow to support GraphQL?

1) You can debug GraphQL execution by opening the “Debug GraphQL” module. The session debugger was greatly enhanced with the New York release, but this is possible in Madrid as well, the same way you debug Service Portal. More can be found here.

You can do some ‘mouse clicking’ in ServiceNow UI while session debugger is running and copy your GraphQL queries.

2) The GraphQL subscription feature might enable you to build modern pub/sub message queues for client-side application solutions. Just as with AMB and AngularJS record watchers, you can define subscriptions (ServiceNow calls them ‘Channel Responders’). With GraphQL you do this by utilizing the sys_rw_amb_graphql_action table, which extends sys_rw_amb_action.

3) All transactions using GraphQL have the “Batch REST” type and cannot be tracked separately. Logging can only be enabled at the node level by turning on advanced REST debugging.

How to cancel script, update or plugin rollback when it’s stuck and is breaking ServiceNow

A useful rollback and delete recovery feature was released in ServiceNow. For each script execution you can track all the database changes, with the possibility to recover everything with the Rollback Executor.

It sounds very useful and seems like it will fix all your oopsies. However, it has flaws: it does not always recover everything and it might take days or weeks to finish.

One resulting flaw is that the RollbackExecutor is counted as an ‘upgrade’ activity. While it is running, you will not be able to use the update set preview and commit functionality – “Update set preview and commit are unavailable because the system is currently upgrading. Click here for the Upgrade Monitor”.

Scheduled jobs with the upgrade_safe=false parameter will also not be executed while the rollback is running.

The usual way to stop any transaction is as follows:

  1. Go to “Progress workers”, locate the running worker and change its state to cancelled.
  2. Go to “Active Transactions (All Nodes)” or “Active Transactions”, locate the transaction you want to kill, right click it and select kill.

This is what you must do; however, both steps are not enough to kill the RollbackExecutor process. You can verify it is still running by going to xmlstats.do and looking for background_progress_workers:

<background_progress_workers count="1" size="16">
  <running_background_progress_workers count="1">
    <background_progress_worker created_on="2019-08-14 20:38:40" executed_time="19:21:08.698" processor="glide.scheduler.worker.6" progress_worker_record="ebbefa45db97bb00c3e6294d0b96196d" queued_time="0:00:00.154" schedule_job="e7befa45db97bb00c3e6494d0b961970" total_duration="19:21:08.852">Executing Rollback</background_progress_worker>
  </running_background_progress_workers>
</background_progress_workers>

Luckily, we can figure out next steps based on the node logs:

The upgrade system is busy because the GlideSystem is Paused

GlideSession message was modified by sanitization. [message=Update set preview and commit are unavailable because the system is currently upgrading. <a href="$upgrade_client.do">Click here</a> for the Upgrade Monitor][sanitized=Update set preview and commit are unavailable because the system is currently upgrading. <a href="$upgrade_client.do" rel="nofollow">Click here</a> for the Upgrade Monitor]

Now we simply use undocumented GlideSystem API functions.

First, we can verify whether GlideSystem is paused with gs.isPaused();

Then we simply resume GlideSystem with:

gs.resume();

Finally, our rollback activity is cancelled and we can use our update sets and scheduled jobs!