Frequently Asked Questions

General

  1. What is FHIR?
  2. Where can I get help?
  3. Where can I get support?

SNOMED CT

  1. How do I add this SNOMED CT release Zip-file to Ontoserver?

Troubleshooting

  1. Indexing fails with an Out-of-memory error.
  2. HTTP OPTIONS requests are returning a 400 error but I thought Ontoserver supported CORS?

Installation

  1. How do I uninstall Ontoserver?
  2. What if I want to use a cache?
  3. My Ontoserver deployment needs to connect out through a proxy, e.g. to reach a syndication server. How do I configure that?
  4. I want to point my Ontoserver at a Postgres instance with custom credentials.

Maintenance

  1. How do I backup Ontoserver's data, including FHIR resources and indexes?
  2. How do I restore an Ontoserver from backup (or copy to a new container)?
  3. How do I update Ontoserver to a new version?
  4. How do I migrate my Ontoserver 4.1.x resources (Value Sets, Code Systems, Concept Maps) to Ontoserver 5.x?

Use

  1. How can I convert between a SNOMED CT Substance and an AMT AU Substance?
  2. I have codes like 3007274010 from my EHR system, but they do not correspond to any of the SNOMED CT codes I get out of Ontoserver, which has 703137001 instead. Why?
  3. Can I have multiple (business) versions of my resources? What happens if I do?
  4. How can I optimise performance of the RESTful calls?
  5. Will using batch requests help speed things up?

General

What is FHIR?

FHIR (Fast Healthcare Interoperability Resources) is an HL7 standard for exchanging healthcare information electronically. Useful references:

  • FHIR Specification v3.0.1
  • Latest FHIR Specification


Where can I get help?

There is a very large, dynamic, and welcoming community of developers on FHIR Chat (Zulip) at chat.fhir.org (see its community expectations). This is the best place to go for general questions about FHIR, terminology services and FHIR, as well as Ontoserver itself.


Where can I get support?

For technical support issues directly related to Ontoserver, please email ontoserver-support@csiro.au.

SNOMED CT

How do I add this SNOMED CT release Zip-file to Ontoserver?

If you have a Zip file containing the SNAPSHOT or FULL files for a complete SNOMED CT Edition, then skip to step 2.

  1. You will first need to get the ZIP file containing the corresponding SNOMED International release that your Extension relies on, and combine the contents of the two files into a single ZIP file.
  2. Make sure your Ontoserver config includes file:///var/synd/syndication.xml as one of the feed locations listed for the configuration property atom.syndication.feedLocation.

    For example, your docker-compose.yml may include the entry:

      environment:
        - atom.syndication.feedLocation=file:///var/synd/syndication.xml
    or
      environment:
        - atom.syndication.feedLocation=file:///var/synd/syndication.xml,https://stu3.ontoserver.csiro.au/synd/syndication.xml
  3. Create the file syndication-sct.xml based on the following template; replace occurrences of:
    • MODULE with the Edition's module Id, for example: 999000031000000106
    • DATE with the Version's date, for example: 20190320
    • SHA with the SHA256 of the Zip file. This can be generated with shasum, for example: shasum -a 256 MySnomedCT_Edition.zip.
    <?xml version="1.0" encoding="utf-8"?>
    <feed xmlns="http://www.w3.org/2005/Atom" xmlns:ncts="http://ns.electronichealth.net.au/ncts/syndication/asf/extensions/1.0.0">
        <title type="text">Local Syndication Feed</title>
        <link rel="alternate" type="application/atom+xml" href="file:///var/synd/syndication.xml"/>
        <updated>2020-03-05T23:35:01+10:00</updated>
        <id>urn:uuid:145d7ab2-9e85-40fd-942c-c46fbc16c104</id>
        <generator>hand crafted</generator>
        <ncts:atomSyndicationFormatProfile>http://ns.electronichealth.net.au/ncts/syndication/asf/profile/1.0.0</ncts:atomSyndicationFormatProfile>
    
        <entry>
            <title>SNOMED CT</title>
            <id>urn:uuid:9f0e4007-16ff-4639-b4d9-cc6e9c7938f5</id>
            <author>
                <name>Leprechauns</name>
            </author>
            <rights>Copyright by Leprechauns</rights>
            <ncts:contentItemIdentifier>http://snomed.info/sct/MODULE</ncts:contentItemIdentifier>
            <ncts:contentItemVersion>http://snomed.info/sct/MODULE/version/DATE</ncts:contentItemVersion>
            <updated>2019-03-20T12:00:00+10:00</updated>
            <published>2019-03-20T12:00:00+10:00</published>
            <category term="SCT_RF2_SNAPSHOT" label="SNOMED CT RF2 SNAPSHOT" scheme="http://ns.electronichealth.net.au/ncts/syndication/asf/scheme/1.0.0" />
            <link rel="alternate" type="application/zip" href="file:///var/synd/SnomedCT_RF2.zip" 
                ncts:sha256Hash="SHA"/>
        </entry>
    
    </feed>
    
  4. When Ontoserver is up and running, execute the following two commands to make the Zip file known and available to Ontoserver:
    docker cp syndication-sct.xml ontoserver:/var/synd/syndication.xml
    docker cp MySnomedCT_Edition.zip ontoserver:/var/synd/SnomedCT_RF2.zip
  5. You can now tell the server to build the index by running (with MODULE and DATE replaced appropriately):
    docker exec ontoserver /index.sh -s http://snomed.info/sct -v http://snomed.info/sct/MODULE/version/DATE
    
    Note, you will need to ensure Ontoserver has sufficient RAM available (in the VM) and allocated:
      environment:
        - JAVA_OPTS=-Xmx14G
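
The MODULE/DATE/SHA substitutions in step 3 can be scripted. The sketch below assumes you have saved the XML template above as syndication-template.xml; the stand-in files it creates are only there so the sketch runs anywhere, and should be replaced by your real edition Zip and template:

```shell
#!/usr/bin/env bash
# Fill the syndication feed template (step 3) for a given edition.
set -euo pipefail

MODULE=999000031000000106   # example module id from step 3
DATE=20190320               # example version date from step 3
ZIP=MySnomedCT_Edition.zip

# Stand-ins for illustration only -- use your real files in practice.
[ -f "$ZIP" ] || printf 'placeholder' > "$ZIP"
[ -f syndication-template.xml ] || printf '%s\n' \
  '<ncts:contentItemVersion>http://snomed.info/sct/MODULE/version/DATE</ncts:contentItemVersion>' \
  '<link href="file:///var/synd/SnomedCT_RF2.zip" ncts:sha256Hash="SHA"/>' \
  > syndication-template.xml

# shasum (or sha256sum) prints "<hash>  <file>"; keep only the hash.
SHA=$({ shasum -a 256 "$ZIP" 2>/dev/null || sha256sum "$ZIP"; } | awk '{print $1}')

sed -e "s|MODULE|$MODULE|g" -e "s|DATE|$DATE|g" -e "s|SHA|$SHA|g" \
  syndication-template.xml > syndication-sct.xml

grep -o "version/$DATE" syndication-sct.xml   # prints version/20190320
```

The resulting syndication-sct.xml is then copied into the container and indexed exactly as in steps 4 and 5.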

Troubleshooting

Indexing fails with an Out-of-memory error.

There are two possibilities here:

  1. Downloading and unpacking a pre-built BINARY index from a syndication server fails.
  2. Building a BINARY index from RF2 source fails.

In both cases you will need to adjust the heap available to Ontoserver. You can do this by setting JAVA_OPTS in the environment section of Ontoserver's docker-compose configuration. It should look something like the following:

    environment:
     - JAVA_OPTS=-Xmx2G
As shown here, 2 gigabytes should be sufficient as a minimum heap size for pre-built BINARY indexes. However, to build a BINARY index you will need at least 12 gigabytes, preferably more. You also need to ensure that the Docker machine has sufficient resources allocated to support heaps of these sizes.

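
For example, a docker-compose.yml entry that pairs a build-sized heap with a container memory limit might look like the following; the mem_limit key is from the Compose v2 file format (under v3/swarm the equivalent lives in deploy.resources), and the 16g figure is illustrative:

```yaml
    ontoserver:
      environment:
        - JAVA_OPTS=-Xmx14G
      # leave headroom above the Java heap for off-heap and JVM overhead
      mem_limit: 16g
```
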
HTTP OPTIONS requests are returning a 400 error but I thought Ontoserver supported CORS?

Ontoserver does support CORS (see config of cors.allowed.*).

If you're seeing 400 series errors then you are probably not making a valid CORS request. Both Origin and Access-Control-Request-Method must be provided as HTTP headers for a CORS request to be recognized.
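
You can check this from the command line by issuing a well-formed preflight request yourself; the hosts below are placeholders for your own Ontoserver endpoint and application origin:

```shell
# A valid CORS preflight: OPTIONS plus BOTH required headers.
# Omitting either header is what typically produces the 400.
curl -i -X OPTIONS 'https://ontoserver.example.com/fhir/metadata' \
  -H 'Origin: https://myapp.example.com' \
  -H 'Access-Control-Request-Method: GET'
```

A recognized preflight is answered with Access-Control-Allow-* headers, subject to your cors.allowed.* configuration.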

Installation

How do I uninstall Ontoserver?

There are two main parts to removing Ontoserver completely from your system: removing the docker images (which contain ontoserver and postgres), and removing the docker volumes (which contain the data).

You can see the docker images by running

docker images
You can uninstall them using a command like
docker rmi -f aehrc/ontoserver:5.4
Or, to remove all versions:
docker rmi -f $(docker images | grep 'aehrc/ontoserver' | awk '{print $3}')
You may also wish to remove the postgres docker image, for example by running
docker rmi -f postgres

You can see the docker volumes by running

docker volume ls
You can uninstall the docker volumes by running
docker volume rm $(docker volume ls | grep -E '_onto|_pgdata' | awk '{print $2}')

You may also wish to remove your docker-compose.yml file, or even to uninstall Docker itself (instructions can be found on the relevant Docker installation pages).

Note: these commands assume a default configuration. If you have made changes, e.g. to the volumes that are used, then the commands for uninstalling images and volumes may differ.
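
If you deployed with docker-compose and have not customised volumes, a single command can remove the project's containers, images, and volumes in one step; run it from the directory containing your docker-compose.yml:

```shell
# Removes this compose project's containers, then its images and
# named volumes. Data in the volumes is lost -- back up first.
docker-compose down --rmi all --volumes
```
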


What if I want to use a cache?

Ontoserver is built to support standard HTTP caching using ETags and Last-Modified headers.

This means you can use standard front-side caches such as Apache, NGINX, Varnish, Squid, etc.

This GitHub project provides a sample deployment setup for Ontoserver using an NGINX cache.
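
You can observe the caching headers directly with curl; the host is a placeholder, and the ETag value in the second request stands in for whatever the first response returned:

```shell
BASE='https://ontoserver.example.com/fhir'   # placeholder host

# First request: note the ETag (and/or Last-Modified) response header.
curl -si "$BASE/metadata" | grep -i -E '^(etag|last-modified)'

# Revalidation: replay the ETag; if the resource is unchanged the
# server answers 304 Not Modified with an empty body, which is what
# lets a front-side cache serve its stored copy.
curl -si "$BASE/metadata" -H 'If-None-Match: W/"example-etag"'
```
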


My Ontoserver deployment needs to connect out through a proxy, e.g. to reach a syndication server. How do I configure that?

The following environment variables can be set in your docker-compose.yml file, in the environment section of your ontoserver container, to support Ontoserver connecting out through a proxy:

  • HTTP_PROXY_HOST (and HTTPS_PROXY_HOST): host of the proxy, e.g. proxy.mycompany.com
  • HTTP_PROXY_PORT (and HTTPS_PROXY_PORT): port of the proxy, e.g. 8080
  • HTTP_NON_PROXY_HOSTS: hosts that should be accessed without using the proxy, e.g. internalserver.mycompany.com

For example, you might have the following configuration in your docker-compose.yml file:

          ontoserver:
            image: aehrc/ontoserver:ctsa-5.8
            container_name: ontoserver
            depends_on:
              - db
            ports:
              - "8443:8443"
              - "8080:8080"
            environment:
              - JAVA_OPTS=-Xmx4G
              - HTTP_PROXY_HOST=myproxy.com
              - HTTPS_PROXY_HOST=myproxy.com
              - HTTP_PROXY_PORT=8080
              - HTTPS_PROXY_PORT=8080       
       

I want to point my Ontoserver at a Postgres instance with custom credentials.

Database configuration comes from Spring Boot. The key parameters are:

  • spring.datasource.url
  • spring.datasource.username
  • spring.datasource.password
However, it is not good practice to put credentials in your docker-compose file. Instead, you can use profiles to pass in this configuration. For example, create a file application-jdbc.properties that contains the appropriate configuration for the above spring.datasource parameters. Copy this into your Ontoserver container (you may want to create a derived Docker image for this), and then use the spring.profiles.active configuration to enable/select the profile. For example:

              ontoserver:
                image: aehrc/ontoserver:ctsa-5
                container_name: ontoserver
                ports:
                  - "8443:8443"
                  - "8080:8080"
                environment:
                  - spring.profiles.active=jdbc
           
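
As a sketch, the profile-based setup might look like the following; the property values, the /application-jdbc.properties destination path, and the derived-image approach are all illustrative, so check where your image reads external configuration from:

```
# application-jdbc.properties (values are placeholders)
spring.datasource.url=jdbc:postgresql://mydbhost:5432/ontoserver
spring.datasource.username=myuser
spring.datasource.password=mypassword

# Dockerfile for a derived image carrying the credentials
FROM aehrc/ontoserver:ctsa-5
COPY application-jdbc.properties /application-jdbc.properties
```
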

Maintenance

How do I backup Ontoserver's data, including FHIR resources and indexes?

Ontoserver's data consists of two artifacts that need to be backed up, both mounted as docker volumes:

  • The pgdata volume mount in the db (postgres) container, which contains FHIR resources, downloaded RF2 sources, and lists of indexes
  • The onto volume mount in the ontoserver container, which contains the indexes themselves, as well as downstream syndication artifacts

Note: These volumes are mounted by the containers, but do not live inside the containers, so it is not sufficient to snapshot the ontoserver and postgres containers.

Once you have your docker client pointed at the docker-machine that is running the Ontoserver instance you wish to back up (and assuming your docker-compose.yml file is located in a myOntoserver directory), run:

  1. docker run --rm --volumes-from ontoserver -v /home/ubuntu/backup:/backup ubuntu tar cvf /backup/backup-ontoserver.tar /var/onto
  2. docker run --rm --volumes-from myOntoserver_db_1 -v /home/ubuntu/backup:/backup ubuntu tar cvf /backup/backup-pgdata.tar /var/lib/postgresql/data

These commands create two tar files in the /home/ubuntu/backup directory of the machine (real or virtual) where Ontoserver is running. You may wish to retrieve them to another machine using a program such as sftp.


How do I restore an Ontoserver from backup (or copy to a new container)?

Once you have the backup tar files (see backup), and with your docker client pointed at the docker-machine where you want to restore Ontoserver:

  1. docker-compose up -d
  2. Wait until ontoserver is up (for example, run docker logs -f ontoserver and wait until it says Started Application in XX seconds)
  3. docker-compose stop
  4. docker run --rm --volumes-from ontoserver -v /home/ubuntu/backup:/backup ubuntu bash -c "tar xvf /backup/backup-ontoserver.tar"
  5. docker run --rm --volumes-from myOntoserver_db_1 -v /home/ubuntu/backup:/backup ubuntu bash -c "tar xvf /backup/backup-pgdata.tar"
  6. docker-compose up -d

How do I update Ontoserver to a new version?

If your docker-compose.yml file refers to a major-minor version (e.g. ontoserver:ctsa-5.7), and you want to upgrade to the latest patch version (e.g. from 5.7.0 to 5.7.1), then all you have to do is pull the latest 5.7 release. To do this, simply run

docker pull aehrc/ontoserver:ctsa-5.7

If you want to change to a specific version, then you can also change the specific version in your docker-compose.yml file. The list of available versions can be found here.

Once you have done either (or both) of these, you can apply the change by re-upping your docker-compose:

docker-compose up -d

This should recreate the ontoserver container with the new version.


How do I migrate my Ontoserver 4.1.x resources (Value Sets, Code Systems, Concept Maps) to Ontoserver 5.x?

If you are updating to Ontoserver 5.x from Ontoserver 4.1.x, then FHIR resources (other than SNOMED and LOINC code systems) in the database will not be migrated. This is because the structure of these resources has changed from FHIR DSTU 2.1 (Ontoserver 4.1.x) to FHIR STU 3 (Ontoserver 5.x). If Ontoserver 5.x finds these old resources in its database, it will refuse to start up.

If you wish to retrieve FHIR resources from Ontoserver 4.1.x so that you can migrate them, you can use the FHIR search (GET requests on /fhir/CodeSystem, /fhir/ConceptMap, /fhir/ValueSet) or read (/fhir/CodeSystem/[id], /fhir/ConceptMap/[id], /fhir/ValueSet/[id]) endpoints of a running Ontoserver 4.1.x instance.

Once the resources have been converted to STU3, start Ontoserver 5 with an empty database and add the migrated resources either individually using the FHIR create/update endpoints, or as a bundle using the /api/addBundle endpoint.
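
One way to pull the resources out of the old instance is a small loop over the search endpoints; the base URL is a placeholder, and note that a search returns a paged Bundle, so large servers may need to follow the Bundle's next links:

```shell
# Placeholder base URL of the running Ontoserver 4.1.x instance.
BASE='https://old-ontoserver.example.com/fhir'

for type in CodeSystem ValueSet ConceptMap; do
  # Each request returns a Bundle of up to _count resources of this type.
  curl -s "$BASE/$type?_format=json&_count=100" > "$type-bundle.json"
done
```
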

Use

How can I convert between a SNOMED CT Substance and an AMT AU Substance?

One of the new features in Ontoserver 5.0 is support for SNOMED CT implicit concept maps, and one of the implicit maps that we have implemented (for SNOMED CT-AU) is the substance-to-substance map, which maps from AMT substances to SNOMED CT substances.

For example, to convert from 31586011000036103 | midazolam (AU substance) | to the corresponding SNOMED CT substance you would call:

/fhir/ConceptMap/$translate?url=http://snomed.info/sct?fhir_cm=281000036105&system=http://snomed.info/sct&code=31586011000036103&target=http://snomed.info/sct?fhir_vs

In this case, the URI that identifies the substance-to-substance map is http://snomed.info/sct?fhir_cm=281000036105, since 281000036105 is the concept id for the Substance to SNOMED CT-AU mapping reference set.
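
Note that the ?, = and & characters inside the url and target values must be percent-encoded when the request is actually sent. With curl you can delegate the encoding to --data-urlencode (the host here is a placeholder):

```shell
# -G turns the --data-urlencode values into a (properly encoded)
# query string; single quotes stop the shell expanding $translate.
curl -G 'https://ontoserver.example.com/fhir/ConceptMap/$translate' \
  --data-urlencode 'url=http://snomed.info/sct?fhir_cm=281000036105' \
  --data-urlencode 'system=http://snomed.info/sct' \
  --data-urlencode 'code=31586011000036103' \
  --data-urlencode 'target=http://snomed.info/sct?fhir_vs'
```
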

I have codes like 3007274010 from my EHR system, but they do not correspond to any of the SNOMED CT codes I get out of Ontoserver, which has 703137001 instead. Why?

While this looks like (and, technically, is) a SNOMED CT code, it is not a SNOMED CT concept id. Instead, it is a description id. It can be identified as such because the second-last character in the code is a 1, and not a 0.

Ideally you should not be seeing these kinds of codes in data extracts: EHR systems (e.g., Cerner) include a table that links each description id to its corresponding concept id, and concept ids are the only codes that should be included in extracts.
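
The concept-versus-description distinction can be checked mechanically, since it is encoded in the SCTID's second-last digit. A small, illustrative shell sketch:

```shell
# The second-last digit of an SCTID identifies the component type:
# 0 = concept, 1 = description, 2 = relationship.
sctid_kind() {
  case $1 in
    *0?) echo concept ;;
    *1?) echo description ;;
    *2?) echo relationship ;;
    *)   echo other ;;
  esac
}

sctid_kind 3007274010   # the EHR code from the question -> description
sctid_kind 703137001    # the Ontoserver code -> concept
```
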

Can I have multiple (business) versions of my resources? What happens if I do?

Yes. Resources such as ValueSets, CodeSystems, ConceptMaps and StructureDefinitions have version properties. When these resources are referenced using their logical URLs (e.g. type-level $expand, $lookup or $translate operations, referring to a profile in a $validate operation, or referencing a CodeSystem or ValueSet in a ValueSet.compose.include), and no version is specified (or if no version can be specified), Ontoserver will try to find and use the most recent resource with the specified URL. In order for this to work, all resources of the given type that share the specified URL must use a consistent version format. The supported formats are:

  • Semantic versioning ("x.y.z")
  • Date format ("YYYYMMDD")

If the resources do not share a consistent version format, or if one or more resources do not have a version, then Ontoserver will report that it is unable to resolve the most recent resource.

How can I optimise performance of the RESTful calls?

It is very straightforward to set up caching of GET requests (POST requests cannot be cached, per the HTTP specification). The answer to What if I want to use a cache? above provides more details and a link to a working example deployment.

Will using batch requests help speed things up?

The short answer is "it depends". More specifically:

  • Using batch means you have fewer HTTP requests between client and server, which may speed things up if your client is not using HTTP/1.1 keep-alive connections or HTTP/2.
  • If your client is running in a web browser, it has a limited number (usually about 5) of allowed simultaneous requests; this limits the degree of concurrency you can get from the server.
  • If your server has a small number of CPUs / cores, then there is already limited concurrency available.
  • Using batch requests means using POST, and thus your requests cannot be cached.

The best advice here is to:

  1. first ensure you have an HTTP cache in place (see What if I want to use a cache?),
  2. then ensure the server has sufficient capacity to handle requests concurrently,
  3. then measure current performance,
  4. then, if possible, support HTTP/2, which allows for multiplexed requests over a single TCP connection,
  5. then measure whether using batch speeds things up or not, making sure you factor in the impact of caching.
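
For reference, a batch is just a FHIR Bundle of type batch POSTed to the server endpoint, with each entry carrying its own request. A minimal sketch (the codes are the examples used earlier in this FAQ):

```json
{
  "resourceType": "Bundle",
  "type": "batch",
  "entry": [
    {
      "request": {
        "method": "GET",
        "url": "CodeSystem/$lookup?system=http://snomed.info/sct&code=703137001"
      }
    },
    {
      "request": {
        "method": "GET",
        "url": "CodeSystem/$lookup?system=http://snomed.info/sct&code=31586011000036103"
      }
    }
  ]
}
```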