The PACS Server

New Features and Enhancements

Server Build 9.0.15


Run queue for disabled action 

When disabling an action, the action configuration page offers the option to process queued studies. The parameter is Process Queued Studies. When set to no, all unprocessed studies remain unprocessed. When set to yes, the system processes studies already in the action processing queue.

Actions batch size configuration 

The batch size field on the action setup page now enforces a limit of 100,000 studies.

Remove multi-hub cleanup tool 

Since server farms do not support hub servers, the multi-hub cleanup tool and its settings have been removed.

GUI feature to manage SSL certificates 

Users with Admin rights have access to the certificate controls on the Admin/Server Settings/Security Settings page. These include generating the SSL certificate signing request (CSR) file and uploading the SSL certificate. Generating a CSR requires entering the required parameters in the popup curtain. When submitted, a CSR file is downloaded. Use it to obtain a certificate from your certificate authority. Once you have the certificate (.pem) file, use the upload control to install it on your server. Note that uploading a valid certificate results in a system restart.
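The GUI generates the CSR for you. For reference, an equivalent request can be produced on the command line with standard openssl; the subject fields below are illustrative placeholders, not values the product requires.

```shell
# Generate a new 2048-bit RSA key and a certificate signing request (CSR).
# Subject fields are placeholder values; substitute your site's details.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout server.key \
  -out server.csr \
  -subj "/C=US/ST=State/L=City/O=Example Hospital/CN=pacs.example.com"

# Inspect the request before sending it to your certificate authority.
openssl req -in server.csr -noout -subject
```

The resulting server.csr is what you submit to the certificate authority; keep server.key private for the eventual certificate installation.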

Web viewer selection for users, groups and system 

If the diagnostic web viewer package is installed on a server, the admin can assign it as the default web viewer. The default web viewer is used when the web viewer is launched by any tool or application, including from the worklist, the technologist view page, and web services calls. To configure the system default web viewer, use the Web Viewer Type setting available on the Admin/Server Settings/Web Server page. The default is the Series View viewer; the Diagnostic option selects the diagnostic web viewer. For group and user assignments, use the Web Viewer Type setting in the Other section of the user and group configuration pages. Note that the Series View web viewer and the Diagnostic web viewer each have their own graphic icon.

Provide simple solution to extract/verify image on the backend 

A support tool is available to extract and decompress an image from a blob using the imutils process executable.

Multifactor authentication - SMS-based code delivery 

Multifactor authentication codes can be delivered via SMS messages. The system must be configured to use the Radar RESTful API. Setup is available on the Admin/Server Settings/Radar Messaging page. Once communication with the Radar system is activated, enable SMS as an authentication mechanism from the Admin/Server Settings/Security Settings/Use multilevel authentication setting. Once enabled, the phone number field in user account settings becomes a required field. Phone numbers must be U.S. numbers entered as ten digits with no other characters. If a phone number is provided, users can include SMS as an option for the delivery of their authentication code.

Farm validator on startup prompts to continue after detecting errors 

When the user invokes the startup process from a terminal session, errors are written to stdout and the user is prompted to continue or terminate. When the system invokes an unattended startup process, errors are logged in rc.log and the system attempts to continue.

Fix script shell references 

The shell variant declared in each shell script has been updated to specify the applicable shell tool.

Display thumbnails for cold studies 

When requesting a preview image for an unprocessed study, such as on the technologist view page or a web services client application, the system returns the thumbnail quality image from the processed repository rather than cooking the entire study.

Add access to old monitor data 

The date range field on the monitoring page permits selecting any date range.

Repohandler optimization: isDedicated() shellscripts 

To improve performance, the repository handler’s mount checking method was changed from shell scripts to C++.

Repohandler optimization: createDirectory to give back "already exists" for the resource 

To improve feedback, the repository handler’s directory creation method returns information indicating a resource already exists.

Suboptimality in creating log entries when saving an order 

To avoid unnecessary processing when creating log entries, numeric state values are not converted to text strings but presented as numbers instead.

Launch a third-party workstation via URL from the worklist 

Admins can configure worklist tools that issue a URL to a defined server. This tool can be used to launch a third-party application. The URL can contain patient and study identifiers. Configuring the URL is available on the Settings/Server Settings/External Applications page. Up to three separate tools can be defined. When enabled, each tool will be available from the worklist (floating) toolbar.

Mark studies that have dirty resources in the processing state 

If a study is in an incomplete storage state, the partially inaccessible indicator in the worklist toolbar changes to yellow.

Remove unnecessary debug logs 

To reduce journal activity, unnecessary debug logs have been eliminated.

Add log entry for full reprocess front-end action 

When the system receives a request for a full study reprocess or to clear the cache, the request gets logged.

Launch a third-party application via XML file drop from the worklist 

DEPENDENCY NOTICE: This feature requires desktop apps build 9.0.36.

A tool can be added to the worklist to launch a third-party application via an XML file drop. The file contents can include data from the selected study. The configuration requires a third-party file drop entry in the external application configuration file (~/var/conf/speech_recognition.cfg) that references a file template (in ~/htdocs.main/data) and the folder in which the XML file is placed. Configure the worklist button from the external applications settings page, Settings/Server Settings/External Applications. Three separate tools can be created. Once enabled, an Open button appears in the worklist’s floating toolbar. When clicked, the XML file is constructed and copied to the configured folder on the user’s workstation. Each file drop request is logged under the log action label externalapplication.
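As a purely hypothetical illustration of what a dropped file might contain (the actual schema is defined by the file template referenced in ~/var/conf/speech_recognition.cfg; all element names and values below are invented placeholders):

```xml
<!-- Hypothetical example only. The real structure comes from the file
     template in ~/htdocs.main/data; element names here are illustrative. -->
<study>
  <patientId>PAT12345</patientId>
  <patientName>DOE^JANE</patientName>
  <accessionNumber>ACC98765</accessionNumber>
  <studyInstanceUid>1.2.840.113619.2.55.3.1234567890</studyInstanceUid>
</study>
```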

Job summary misleading 

The command separator in the command string presented in the expanded task job was changed to semicolon (from backslash) to eliminate misinterpretation of the command list.

Share system settings across server farm - autocorrection matching criteria 

REVERSIBILITY NOTICE: This update removes local instances of the affected settings which will need to be manually restored if this feature is rolled back.

Additional system settings are shared so all servers in a farm access the same parameters. These include most matching criteria settings and prior relevancy settings. See Jira for the complete list.

Pass worklist column data to viewer as overlay info fields 

Any data value in the database (with a COLID) and defined using standard DICOM VRs can be sent to the viewer and become available for display as overlay data. The affected fields must be configured in the file ~/var/conf/getpbs.conf using the sendExtraFields parameter. The setting’s value is a newline-separated list of COLIDs enclosed in double quotations. When creating a customer configuration file, it must include the fields in the built-in default list (in ~/component/cases/cfg/getpbs.conf).
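A minimal sketch of the setting is shown below; the COLID values are placeholders, and the exact quoting conventions should be checked against the built-in default file in ~/component/cases/cfg/getpbs.conf.

```
# Hypothetical sketch of ~/var/conf/getpbs.conf.
# The value is a newline-separated list of COLIDs in double quotations.
# Include the built-in defaults plus any custom fields; COLIDs below
# are placeholders.
sendExtraFields "COLID_A
COLID_B
COLID_C"
```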

Optimize task creation in taskd 

Eliminated some unnecessary locking that caused delays when creating multiple taskd tasks.

Custom Fields validator improvements / review 

The custom field validator reports errors when attempting to save an invalid or unsupportable configuration. For example, if the entry contains neither a DICOM tag nor a field name, the validator reports the error.

Improve success rate for collecting taskd thread dumps 

The factory default java taskd start command includes arguments that will create a thread dump when an exception occurs.

Configuration backup delete log is ambiguous 

The log entry referencing a configuration backup purge has been updated to identify the backup’s date and time.

Make list and action save more robust 

The order of steps associated with a list update has been modified to safely handle the queues created by actions.

Add support for TLS 1.3 

TLS 1.3 has been added and configured as the default protocol. Support for TLS 1.2 is available but disabled by default.

Web viewer enhancements: Hanging protocol presets 

The web viewer allows the user to apply and populate an image frame panel in place of the default series view panel. Available options include a 1-up grid and a 2-up grid. The frames can be populated automatically (the default is to auto-populate), provided there are enough series in the study to fill the number of cells.

Hungarian localization update up to 9.0.14 

The Hungarian language resource files have been updated to reflect version 9.0.14.

Dctransaction table purging should be configurable 

Transaction result codes are saved for a configurable period of time. The Purge Time of Transactions setting is available on the Admin/Server Settings/System page. The default is 24 hours.

Create tool to remove a study from the replica only 

The Eraser class used to clean up deleted studies has been updated to purge studies from a replica server. When the server is in replica mode, the tool will clean up the cache, processed repositories, database table, references, tasks, locks, etc., but will not remove the study from the DICOM repository.

Add cold-only option to reheat action 

An option has been added to the processing routines to check a study’s processing status and, if it is at least “cooking”, process only the data in the cache repository.

Checkoverload should not move/clean data when moveLimit is set to 100 or more 

Some storage devices can report greater than 100% usage, which the checkoverload tool did not handle. Now the tool treats any usage value greater than 100% as full.

Create module to resolve templated data for a study or list of studies 

A new class has been created to parse templated data, specifically parts of coercion rule strings, into a defined structure.

Graceful stopping of actions 

The stop script checks whether all registered services are stopped and waits until the answer is yes before completing.

Review default cache (or cache-like) repo config 

Clearable repositories, such as the cache repo, default to being striped (switchlimit=0). This applies to newly created repositories only.

Remove reduced taskd from Streamserver 

Taskd is not used on the stream server and has been removed.

Farm validator should detect, warn and quit when started on non-App server 

The farm validation should always be run from the App server. If run on another server, the validator reports the error and terminates.

Remove "OrderManager::load runs" debug log 

An obsolete debug log was removed.

Bench timing mechanism improvements 

The bench timing framework has been updated to make bench timing tools easier to read, extensible, and available for both java and C++.

Ignore xSTUDYDELETED index in worklists 

To improve the efficiency of some worklist queries that are so vague that the deleted study index becomes the only effective index, the system ignores the deleted study index.

Optimize truststore load 

Unnecessary and redundant access to the key store file is reduced by loading the trust store into memory cache.

User Export option to include xml template assignments 

The export user tool includes each user’s XML template settings configuration file. The import user tool will import these if the user does not already exist on the target server. Existing settings can be merged with the imported settings using the import user tool’s -u parameter.

Missing taskd resources for epws SendReport tasks 

Resource limits have been applied to some web services functions, including sending a report, sending all reports and sending delete requests. They are limited by the same resource setting used to restrict sending orders.

Taskd optimization - Info Collector 

The taskd information collector has been optimized by removing a sort on creation date.

Email change MFA verification 

When multifactor authentication is enabled, changing a user’s password, email address or phone number from the Preferences page triggers a verification event.

Avoid using non-secure session cookie 

Avoid creating sessionid cookies for non-secure connections when the server is set to use secure connections.

log4j debug logging may be called even if syslog debug logging is turned off 

A new script, $ASROOT log.asr, is available to enable and disable debug logging. This command applies to rsyslog, journald and log4j. Options exist to restore custom log4j configuration files from a backup folder if saved when debug logging was enabled.

Monitor the number of running threads 

Monitoring tools have been added to track the number of running threads in the system, including taskd threads, Hermes threads, apache threads, tomcat threads, and overall medsrv threads.

Refactor medsrv code to share a common code base for v9 and cloud deployment 

The code base has been merged into a single branch that can build medsrv and cloud packages.

Unnecessary callback hook proliferation calls 

Proliferating callback log entries through the dotcom interfered with the callback logging on other servers.

Server-side support for DAP XML drop 

The server constructs and sends the XML file to the loader so the load can place it in the folder on the workstation.

Remove /ldap/export.ldif file 

Obsolete LDAP files and links have been removed.

Enable core dump creation on java VM crash 

The system configuration is set to create java core dumps by default.

Compatibility for SHARED_CONF with v8 

The proliferation feature is unnecessary in v9 but is maintained for source code compatibility with v8. If the script is used in v9, it returns an error.

Verify uploaded file's extension 

Files uploaded from the user interface are verified to assure the contents match the file extension.

Disable check for duplicate email addresses and phone numbers 

As a security measure, the system does not permit sharing email addresses and phone numbers across multiple user accounts. For existing servers that never enforced this practice or don’t use multifactor authentication, the restriction can be disabled. The setting, Admin/Server Settings/Security Settings/Permit same contact for different user accounts, can be changed to permit duplicate addresses and phone numbers. When enabled, click the yellow warning icon to download a list of user accounts sharing email addresses and phone numbers.

Append missing deviceId information to the WS Manual for ForwardStudy 

The eRAD PACS Web Services Programmer’s Manual has been updated to include a missing parameter in the forward study command details.

New user's Forward Targets default to user group's default 

The default user account setting for forward targets has changed from all devices to the group account’s default.

Remove unused jsp imadmin/customize.jsp 

An unused JSP page has been removed.

Indicate on tasks GUI if additional queued tasks exist 

If the actual number of suspended tasks exceeds the number displayed on the Tasks page, the total task count number will indicate the number displayed plus the total number in the queue.

Optimize query count on global restriction list caching 

Caching a user’s global restriction (access) list has been optimized by collecting the identifiers a single time.

The licensing pages require admin or support permissions 

After the license is first installed, access to the license renewal page is restricted to users with administrative or support rights.

Review streamserver logging 

Stream server logs have been cleaned up: error IDs are logged, redundancies consolidated, study IDs added, info log level limits applied, debug log levels included, and trace granularity improved.

Add viewer bookmark handling support to streamserver 

Bookmark data is passed to and from the viewer through the stream server.

Add user/group limited viewer coercions 

Admins can assign viewer coercion rules to individual users and groups. Configuration is available on the user and group account edit pages, in the Viewer Coercion Rules field in the Other section. When initiating a viewer session, viewer coercion rules are applied in the following order: Preceding Global Viewer Coercion Rules; user-specific coercion rules or, if there are none, the user’s Primary group’s group-specific coercion rules; Trailing Global Viewer Coercion Rules. Note that user- and group-specific rules are mutually exclusive, as stated. See the eRAD PACS Data Coercion Manual for coercion rule details.

Make the default of SQLTimeLimit (Maximum query time) 15 seconds 

The SQL time limit is configurable and the default has changed from unlimited to fifteen seconds. The setting, SQLTimeLimit, can be defined in ~/var/conf/self.rec.
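A minimal sketch of the override (the key name comes from the release note; the value unit of seconds and the file syntax are assumptions, since the .rec format is not documented here):

```
# Hypothetical sketch: overriding the maximum query time in ~/var/conf/self.rec.
# Default is 15 seconds; the syntax shown is an assumption.
SQLTimeLimit 30
```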

Optimize user ID collection when sending message to a user group 

Sending simple and broadcast messages to users has been optimized by eliminating the collection of unnecessary data.

Append the web service documentation with the logs table fields 

Details about the log table fields in web services messages have been added to the web services programmer’s manual.

User email address is mandatory to webservice users 

The email address requirement for web services accounts has been eliminated.

Add table rename function to SQLTables mechanism 

A mapping file has been created to define the SQL table mappings required during an upgrade. This file should not be modified by support or admins.

Web services command to launch the viewer with a study and its relevant priors 

The web services command to launch the viewer has a new parameter, Priors, the client application can use to instruct the server to include all relevant priors in the viewer session. See the eRAD PACS Web Services Programmer’s Manual for details.

Limit information in SQL error returned to user 

SQL error messages containing user identifiers have been replaced with a general error message. The original error message with full details is written to the server logs and available, if necessary.

Add more descriptive error handling to SDK 

The streaming protocol includes more descriptive error messages, including the affected file name and canvas.

Updates arriving from a peer server may result in Partial studies 

When objects arrive in reverse order from the acquisition order, as defined by the system acquisition time stamp, the blob keeps the object order consistent to avoid partially cooked studies.

Allow AND filter groups for per field filter string 

REVERSIBILITY NOTICE: Relevancy rules created using this feature will need to be manually edited to remove the enhancement.

The search filter and person name filter syntax has been extended to support an AND operation. See Jira for the updated syntax. Using this construct in relevancy rules requires a change to the user interface. This change is released separately.

"Break study lock" event should be logged into oper_info.log 

Lock breaking has a log action, breaklock, which can be displayed on the Logs page.

Show cache miss in logs 

Cache misses are recorded in the log files and can be seen on the Logs page using the log action cachemiss.

Optimize UserPreferences loading during report page rendering 

The report template page is loaded and cached to avoid having to load and render it multiple times.

Move server level user configuration file back to local config directory 

The server level user configuration file, user_config.xml, has been moved from shared storage to local storage.

Adding external DB server to the farm 

Registration and configuration of an external database is performed on the application server.

Create tools to investigate the StreamServer issues 

Support tools for monitoring the stream server exist. They include tools to parse the stream server logs into JSON format, load the JSON files and visualize the event timing, and stress test the stream server. Additional event logging has been included as well.

Extract real client IP when proxied in stream server 

When using a proxy or VPN, the real IP address of stream server clients is extracted from the connection headers so it matches the address used by application and web servers.

Support substandard DICOM studies 

Instead of rejecting non-compliant DICOM objects whose instance UID is not unique, the system creates a complex identifier that is more likely to be unique.

Sort alphabetically the monitor server dropdown items 

The entries in the server menu on the monitors page are sorted alphabetically to make them easier to locate in the list.

Configurable maximum PDU size to 131072 

The maximum acceptable receive PDU size setting permitted by the GUI has been raised to support larger values. The default has been updated to 131,072.

Server information page should remember its active tab 

The system remembers the Server Information page’s active tab upon refresh.

Eliminate autoretrieve latency/retries 

To avoid the possibility of a two-minute delay when initiating an autoretrieve, the determination task has been moved from Storescp to Dcreg. A configurable setting, autoRtvDelaySec in ~/var/conf/autoRetrieve.conf, is available to insert a minimum delay before initiating the autoretrieve process. The default is two seconds.
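A minimal sketch of the delay setting (the parameter name and default come from the release note; the key=value syntax is an assumption):

```
# Hypothetical sketch of ~/var/conf/autoRetrieve.conf.
# Minimum delay, in seconds, before initiating the autoretrieve process.
autoRtvDelaySec=2
```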

Improve OnStudyDbReleased tasks/processes 

Database updates were running sequentially while deleting a study, causing unnecessary delays for large studies (i.e., those with many objects). Database updates have been moved to independent, collapsible tasks to improve performance.

Upgrade tomcat to 9.0.100 

Tomcat version 9.0.100 has been added to the installation package.

Enhance herelod logging 

Additional logging was added to herelod to detect a communication mismatch exception.

Include support for Segmentation Storage SOP Class 

DICOM’s Segmentation Storage SOP Class and Surface Segmentation Storage SOP Class have been added to the default supported SOP class list.

Make herelod more robust, so it avoids getting stuck 

Herelod has been refactored to avoid a crash when terminating, to avoid reporting that processes terminated successfully when they have not, to avoid a stuck thread when the servant exits successfully, and to check whether a channel is stuck in a purging state before reserving it.

Retry DB connection creation 

To avoid scheduled restarts of the database, dropped connections are retried thirty times with one-second sleep intervals. These settings, retryCount and retrySleep, are configurable in ~/var/conf/db.cfg.
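A minimal sketch of the retry settings (the parameter names and defaults come from the release note; the key=value syntax is an assumption):

```
# Hypothetical sketch of ~/var/conf/db.cfg connection-retry settings.
retryCount=30   # number of reconnect attempts (default 30)
retrySleep=1    # seconds to sleep between attempts (default 1)
```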

Extend reheat related logging 

Logging has been extended to include details about why a study ends up in a partially processed state, such as distinguishing between missing blobs and broken blobs, and including study UID and activity status in the log entry. Additionally, activity changes have been moved from the debug log to the info log for longer retention.

Improve success rate for collecting apache thread dumps 

Apache thread dumps can be enabled in the apache ctrl (startup) script to make thread dumps available for debugging.

Performance issue with apache thread monitoring 

The monitoring method used to track tomcat thread counts could lock up threads when the volume is high. The solution was to replace the method with one using jstack.

Drwatson should not restart MySQL 

An automatic restart of MySQL when it is in ERROR state has been removed to avoid unscheduled inaccessibility to the database when transient error statuses occur.

Suppress commitment tasks when no device requires commitment 

When no device requires commit processing, the system suppresses commitment tasks.

Allow breaking own lock without break lock right 

DEPENDENCY NOTICE: This feature requires viewer 9.0.33 or later.

If a lock request arrives from the same user who owns the lock, the lock is granted even if the user does not have lock-breaking rights.

Improve delete performance for purged studies 

A fast delete mode was added to improve the time needed to delete a study from the system. It is used when no references exist to any of the study’s objects.

Baseline server code base on Platform Rocky 8.10 

Medsrv runs on Rocky 8.10. An updated bill of materials and manufacturing manual are available.

Improve delete performance for partial deletes (keep report) 

The fast delete mode has been extended to be called when deleting a study but keeping the reports and key images. It is used when making space and in purge actions. To enable, set ActionDeleteMode to “quick” in ~/var/conf/self.rec.
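A minimal sketch of the switch (the key and the “quick” value come from the release note; the file syntax is an assumption):

```
# Hypothetical sketch: enabling fast delete for make-space and purge actions
# in ~/var/conf/self.rec. Syntax shown is an assumption.
ActionDeleteMode quick
```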

Configurable delay for Study Updated WS notifications 

The system supports a configurable setting governing the frequency of study update notifications. The shared configuration file is $SHARED_CONF/common.cfg. The parameters are StudyUpdateNotificationCollapseMode and StudyUpdateNotificationCollapseSeconds. The default is to delay for four seconds.
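A minimal sketch of the shared settings (the parameter names and the four-second default come from the release note; the mode value and key=value syntax are assumptions):

```
# Hypothetical sketch of $SHARED_CONF/common.cfg notification settings.
# The value "delay" is an illustrative placeholder, not a documented option.
StudyUpdateNotificationCollapseMode=delay
StudyUpdateNotificationCollapseSeconds=4
```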

Optimize EPLock table indices 

To improve the efficiency of data updates, the key field in the EPLock table is defined as a NOT NULL field.

Optimize herpa creation (allow reading blobs in parallel) 

To permit parallel reading of blob files, the reading threads use separate file objects.

Study in a bad state can cause an infinite loop and ultimate taskd crash 

If a collapse request was already scheduled but reporting a count of zero, and a study prep was completing, the system could enter an infinite loop which continuously allocated memory until depleted and taskd crashed.

Add info log to user viewer profile changes 

New log entries have been added detailing viewer profile saves and checksum matching checks.

Occasionally blobs get corrupted 

To avoid inefficient file system checks that could result in an invalid blob file, the system changed the way it checks for the blob folder.

Add more logging to taskd/ctrl stop 

Additional logging is saved in the rc log and displayed on the console when stopping taskd, including the pid number, interrupts issued, and termination complete status.

Log returned data on the StudyQuery WS call 

To troubleshoot web service query issues, the study key value, if requested in the query command, is recorded in the log entry.

Enable the merging / copying of list-type profile elements as a single unit 

Added support to copy part of a profile element with data from the group profile. This was specifically necessary to copy the PHI fields from the group profile to an individual profile.

Handling of study key when opening webviewer via webservice 

The web service command to open the web viewer has been extended to accept study access keys as parameters. If no key is included in the command, the web viewer uses the user access information to create a key on-demand.

Performance degradation with blob creation robustness changes 

An error check performed when creating blobs flushed the data object file after adding it to the blob, which resulted in a performance hit when using NFS storage.

ObsolescenceInfo handling being over-synchronized causes performance issues on NFS 

Calls to retrieve the last modified time and check for file existence were performed serially, resulting in a performance hit when using NFS storage. They are called in parallel now.

Skip retrying reheat when cookingdone finds broken blob 

When cooking detects a broken blob, retrying the cooking process is unnecessary and therefore not scheduled.

Remove synchronization bottleneck from date2str and str2date conversions 

The date formatting routines were called inside synchronized blocks, which could create a bottleneck and a performance hit; the synchronization has been removed.

Add some info logging to taskd's "from stopped state to java vm exit" process 

Additional logging exists to track taskd activity from the time the process is stopped until the java VM exits.

Enhance config file caching and TTL handling 

The configuration file handler was updated to use an optional time-to-live period for cached configuration files residing on shared disks.

Optimize repo directory deletion 

The repository handler deletes studies in parallel. The number of parallel threads is defined by deleteThreads. The default is 4 if the repository is shared. Otherwise, it’s 1. The number of child tasks a main task waits for is defined by deleteBatch. Both settings are defined in repository.cfg in the repository’s root directory. See the eRAD PACS Repository Manager manual for details.
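A minimal sketch of the two settings (the parameter names and defaults come from the release note; the key=value syntax and the deleteBatch value are assumptions — see the eRAD PACS Repository Manager manual for the authoritative format):

```
# Hypothetical sketch of repository.cfg (in the repository's root directory).
deleteThreads=4   # parallel delete threads (default 4 if shared, else 1)
deleteBatch=100   # child tasks a main task waits for; value is illustrative
```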

Enhance createResource performance in repositoryhandler 

To enhance the performance of selecting a repository on which to store a new resource, the system caches the active mounts for a configured period of time. The time is assigned by the activeMmountTtlSec setting in the repository.cfg file.

Remove performance bottleneck in pb-scp "StudyPathCache" 

The data acquisition process holds a lock, and when parallel processes need to look up or create a study’s storage location, the lock blocks the cached lookups, slowing down the entire process. A configurable parameter, PbScpNoLock, is available in ~/var/conf/self.rec to control whether the lock is applied system-wide or per study.

Optimize needLog flag handling in registration 

A java cache has been created to cache flag files so the meta repository isn’t accessed if the flag file already exists, eliminating file contention and improving performance. These flag files are used to log study level auto-forwards.

Support Label Map Segmentation Storage 

Support for DICOM’s Label Map Segmentation Storage SOP and Height Map Segmentation Storage SOP classes has been added.

Optimize RefCounter indices/fields 

To improve the efficiency of database updates, all fields of the unique index in the RefCounter table are defined as NOT NULL fields.

Optimize license checking (study query date range) 

Fingerprint calculations have been optimized for larger sites where there’s enough data to establish usage characteristics in a smaller time window.

Use message id as a transaction ID when processing incoming web service messages 

If a web services client resends a message but does not include a transaction ID identifying the original request, the server treats the message ID as the transaction ID to avoid creating a duplicate request. The feature is optional. The setting exists on the Admin/Devices page. 

Server Build 9.0.14


Redesign checkoverload to handle large non-clearable repository mounts
For repositories that are configured non-clearable, the selection of movable studies has been changed from finding the oldest 100 resources to one in which any study older than a certain age can be moved. A new configuration setting, moveTime in repository.cfg, defines the age of studies that can be moved. Each time the process runs, it picks up where it left off, assuring the oldest studies will eventually get moved.
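A minimal sketch of the new setting (the moveTime name comes from the release note; the value unit and syntax are assumptions):

```
# Hypothetical sketch: repository.cfg for a non-clearable repository mount.
# Studies older than this age are eligible to be moved; unit is illustrative.
moveTime=30d
```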

Add parameters for processing batchable actions
The Admin can configure how the system handles studies matching the search criteria but not included in the current batch and then not matching the search criteria during the next run. By default, the unprocessed studies are processed before newly matching studies. Alternative options include processing newly matching studies first followed by the unprocessed studies (from the previous search) and dropping unprocessed studies completely. The setting, Processing dropped studies, is on the action configuration page.

Share attachment repository amongst farm servers
UPGRADE NOTICE: Before installing this patch, the attachment repository, ~/data/attachment/generic/order/repository, must be shared across application, web and registration servers.
The attachment repository is shared between application, web and registration servers. Sharing details are available in the Shared Storage Requirements section of the eRAD PACS Manufacturing Manual.

Dcreg tasks can get stuck behind onStudyDbReleased tasks
To avoid a situation in which registration tasks get backed up behind slow, higher priority reference release tasks, a new resource, System.REFCOUNT, has been added. The default is set to 1, allowing registration and other tasks to run.

Notify user that one of the studies is partial
The Partial state has been added to the list of prepared states the server can use to notify the viewer of a study’s processed state.

Implement viewer session interface for the diagnostic web viewer
Communication between the viewer and the server has moved from HTTP via the application server to streaming via the stream server. This applies to both the thick viewer and the diagnostic web viewer.

Media creator role
REVERSIBILITY NOTICE: Uninstalling this build requires manual removal of the role configuration and shared storage settings.
A media creation role, Media, has been created to assign MCS functionality to a farm server. The existing local and remote MCS modes and options are preserved. Settings are stored on storage shared between the application, web and media servers. Coercion support has been added: coercion rules are applied to the objects added to the media contents.

Facilitate key based access from portal
To simplify account management from a web services client, access keys can be used when requesting to open studies in the viewer. See the openViewer.epw command in the eRAD PACS Web Services Programmer’s Manual for details.

Graceful stopping of checkOverload
When stopping checkOverload tasks with a kill command, including when using rc to stop the system, the task will run to completion and terminate gracefully.

On the farm's Admin/Tasks page, make the "THIS SERVER" line collapsible
All server task groups are collapsible on the Tasks page, including the application server’s task group.

Farm validator improvements
The farm validator has been updated to check that the IP address and server name match and that the local and remote database connections are correct. A verbose parameter has also been added to dump additional details.

Add the ability to call a specific action
Actions, actions.jsp, can be initiated from the command line with arguments identifying a specific action, a specific list owner and a specific list. Supported actions include Compress, Copy, Delete, Edit, Forward, Notify, Prefetch, Print Report, Reheat, Retrieve and Stop Sending Updates. Specifying just an owner will run all actions assigned to the owner. Specifying just an action will run the action on all registered lists. If no action arguments are specified, all actions are run.

Integrate rmiclasses into classes
The RMI classes code has been merged for better code management.

Password creation vulnerable to CSRF
A cross-site request forgery (CSRF) vulnerability in password creation has been eliminated.

Clean cache and processed before reheating from reheatStudies.sh
To avoid leaving unnecessary files in the processed directory, all files are purged when a reheat request is executed. In reheat actions, the purge is configurable using the Delete Mode option.

Create a Test task for time evaluation
A task, imagemed.taskd.TestTask, is available for evaluating task performance times.

Farm validator should detect tomcat.init files
The farm validator reports the presence of tomcat.init files in the tmp directory.

Callback hooks support for calling a script
An unpublished web services call, callbackhook.epw, is available for use in callback hooks to call a local script. Scripts reside in the ~/var/callback folder.

Keep thumbnail quality image data in processed for "cold" studies
REVERSIBILITY NOTICE: If uninstalled, studies need to be reprocessed (re-cooked) to avoid just-in-time cooking of the thumbnail images.
The processed repository (from previous PACS versions) remains in the system to persistently store the initial quality thumbnail images, denoted by their p.ei4 filename extension. This data is used by the technologist view page and other thumbnail-based views for uncooked studies.

Modify migration process mode for v9
The processing mode settings are available to define what to do with acquired studies. The setting, Processing migrated studies, is on the Server Settings/Study Operation page. There are options to generate processed data only, generate both processed and cache data, or generate neither. Note that image-less studies, such as an order, ignore these settings and will always be fully cooked.

Make checkoverload's night management's default lower value configurable
By default, the night move process starts with a limit of 5% and, after a week, automatically adjusts the threshold based on actual volumes. Because the starting threshold might be insufficient, it is now configurable. A setting, nighLowerPercent in repositorypart.cfg, can be used to set the default. If not present, the default remains at 5%. For more information, refer to the eRAD PACS Repository Handler manual.
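A sketch of the corresponding repositorypart.cfg entry, using the setting name given above; the value shown is only an example of raising the starting threshold above the built-in 5%:

```
# repositorypart.cfg
# Starting threshold for the night move process.
# If absent, the default remains 5%.
nighLowerPercent = 10
```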

Repository handler should not scan the mounts for location
A repository handler optimization has been extended to the methods used when scanning mount points for details.

Do repository mount initializations and up-to-date checks in parallel
When initializing repository mount points or performing up-to-date checks, they are performed by multiple threads running in parallel. The number of threads is defined by numThreads in repository.cfg. The default is 64. 
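A minimal sketch of the thread-count setting described above, using the documented name and default:

```
# repository.cfg
# Number of parallel threads used for repository mount
# initialization and up-to-date checks. Default is 64.
numThreads = 32
```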

When in Authoritative mode, avoid full repo scans
Initialization of the repo library previously included a full scan of the repository, which caused performance problems on systems with a large number of mount drives. Mount points are now initialized only when it is necessary to access them.

Avoiding parsing empty repositorypart.cfg files
Repository mount initialization skips mounts with an empty repositorypart.cfg file.

Avoid multiple full scans to the dicom repository when object table cannot be mapped
When multiple tasks need to find studies in an error state, the system performs the full scan once.

Attempt to auto-reheat partial studies
The active study manager will automatically reheat a study if the processing mode of an incoming object is higher than the state of the study, and when the evaluated state of a study after finalization is lower than indicated by the processing mode. 

Partial state to be indicative of pending tasks
If there are tasks in the queue, the processing state is reported as Partial, even if those tasks apply to objects already registered in the blob.

Avoid accessing the dicom repositories when importing legacy data from origin
During wormhole import, the mechanism to restructure the meta info was called. This is unnecessary because the process is triggered on demand when the legacy data is accessed, and when the meta data is located on slow (tier 3) storage, the call slows down the import process. The call has been removed from the import path.

Separate bookkeeping and resource deletion in Replica mode
When in Replica mode, a server’s repository handler keeps its bookkeeping up to date, but since it is read-only, it does not actually update the repository data. Under some conditions, such as when the wrapper cannot establish the location for a removal, it performed a low-level scan. Since the handler isn’t going to update the repository, this scan is unnecessary and has been eliminated.

OnStudyDirReleased suboptimality at replica servers
The Replica server tried to delete the study directory, but when in wormhole mode, it is read-only. The request has been removed to avoid unnecessary processing. Additionally, a request to remove a file created as a workaround for a now-resolved issue has also been eliminated.

Check server load and niceme metrics less frequently during whimport
The niceme metrics are collected once every defined period. The period is configurable with the CheckInterval setting in ~/var/conf/niceme.conf. The default is every five seconds (=5000).  
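Using the file and setting named above, the default configuration could be written as follows (the value is in milliseconds):

```
# ~/var/conf/niceme.conf
# Interval between niceme metric collections, in milliseconds.
# Default is 5000 (five seconds).
CheckInterval=5000
```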

Stream server additional logging for study and cache initialization
Log entries have been created to mark the progression of study table and study cache initialization.

Add priors-only options to expanded actions
REVERSIBILITY NOTICE: Actions using the new settings need to be manually reset prior to downgrading. If not, the action setting will automatically be reset to the default.
Actions with an expanded list option include new options for processing the data. The setting has been renamed from Include priors to Process currents/priors. The options have been renamed and expanded. They are now Currents without priors (formerly None), Currents and relevant priors (formerly Relevant priors), Relevant priors without currents, Current and all priors (formerly All priors) and All priors without currents. 

Optimize WorklistResult usage to decrease query count
The query mode issued to collect all values for all fields is used when there’s a need to collect local priors, when collecting the content for processing actions, when exporting worklists and when executing a web services query request.

Saving PSg objects from the viewer takes long when there are many mount points
Submitting a presentation state to the server could take longer than necessary because the server performed a full scan of the repository. This scan has been eliminated.

Improve reimporting Cooked migrated studies
Studies migrated into the Origin server might not exist in the Replica server, causing inconsistencies in the database and processed data. This change detects affected studies and objects and assures they are reprocessed and reheated as necessary.

Suboptimality in relevant prior processing rules
To improve the performance of prior match processing, when the current study does not match the prior matching condition, the relevant prior studies of the current study are not evaluated.

Rebrand PACS as DeepHealth PACS
The product labeling has been updated to use DeepHealth labels and logos.

Pass study list to diagnostic web viewer
To support the viewer’s study add feature, the server must pass the prior studies to the viewer. The server now includes this information in the response to the getpbs request. 

Concurrent access to a mount's tmp directory causes performance bottleneck
REVERSIBILITY NOTICE: When wormhole processing is inactive, the queues and the mounts’ tmp directories need to be emptied before rolling back to an older version. When wormhole processing is active, the replica side is reversible as there are no DICOM objects sent directly to the replica.
To avoid a bottleneck accessing the shared data repositories, incoming DICOM objects are distributed across multiple tmp directories using a hashing algorithm. 
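The distribution scheme described above can be sketched as follows. This is an illustrative Python model, not the product's actual implementation: the function name, the use of MD5, and the directory naming are all assumptions chosen to show how a stable hash spreads incoming objects across multiple tmp directories so concurrent receivers do not contend on one directory.

```python
import hashlib

def pick_tmp_dir(sop_instance_uid: str, num_tmp_dirs: int = 16) -> str:
    """Map an incoming DICOM object to one of several tmp directories.

    A stable hash of the object's SOP Instance UID distributes writes
    evenly, and the same object always maps to the same directory.
    """
    digest = hashlib.md5(sop_instance_uid.encode("ascii")).hexdigest()
    bucket = int(digest, 16) % num_tmp_dirs
    return f"tmp{bucket:02d}"

# The same UID always selects the same directory.
assert pick_tmp_dir("1.2.840.113619.2.55.3") == pick_tmp_dir("1.2.840.113619.2.55.3")
```

Because the mapping is deterministic, a receiver that restarts mid-transfer still finds partially written objects in the expected directory.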

Improve logging in herelod
Herelod and herelod_if logging has been enhanced to include channel numbers to assist in correlating client and server entries.

The "Partially inaccessible" indication should be optimized
When evaluating a state that depends on the health of a repository, such as the partially inaccessible state presented on the worklist, the system gets the information from the database and no longer uses the repository handler to check the mount.

Optimize existence check in RHCache
Optimizations for managing the repository handler’s cache include making the synchronize block per source, and setting up an expiration time period after which the sanity check gets rerun.

Optimize herpa creation
Removed the check for thumbnail and processed files before collecting herpa data from the blobs.

Consolidate cache blob optimizations
The task responsible for consolidating blob creation has been optimized to remove inefficiencies when compacting multiple blob files. The task priority has been changed to allow prepstudy tasks to complete first and the final blob generation task is scheduled through the load balancer.

Option to ignore duplicate addresses and phone numbers when importing users
The import user script has added an option, -imfa, to ignore duplicate addresses, duplicate phone numbers and other details required in order to assure multifactor authentication is secure. Note that the warnings are still displayed on stdout but the data indicates the accounts were successfully imported.

Make access list cache parameters configurable
Access (aka, global restriction) lists are cached and the settings governing the cached data are configurable. The settings are in ~/var/conf/grCacheParams.cfg. They include GlobalRestrictionsCacheMaxSize (default=10000), GlobalRestrictionsCacheInitSize (default=100), and GlobalRestrictionsCacheMaxAge (default=5000ms).
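Written out as a configuration fragment, the documented defaults for the file named above would be:

```
# ~/var/conf/grCacheParams.cfg
GlobalRestrictionsCacheMaxSize=10000
GlobalRestrictionsCacheInitSize=100
# Maximum age in milliseconds.
GlobalRestrictionsCacheMaxAge=5000
```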

Get user profiles from app servers for media to avoid sharing the user repository
To avoid the overhead associated with accessing the shared user repository, media (MCS) servers request the user profile data directly from the application server. 

Send Enhanced SR SOP Class objects to the viewer
The server includes the Enhanced Structured Report SOP Class objects in the data sent to the viewer so the viewer can apply the SR results.

Case-insensitive check of hostname in Farm Validator
The farm validator ignores the case when checking hostnames on farm servers.

Improve MCS resources to prevent a single media session hold up the others
MCS tasks specify a media session resource to prevent a single media creation task from using all the media threads and blocking other, smaller media tasks from running. The resource is System.MCS.Session.&lt;mediasession&gt; and is defined in ~/var/conf/resources.cfg. The factory default is 6 threads.
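A sketch of the resource entry described above; &lt;mediasession&gt; is a placeholder for the actual session identifier:

```
# ~/var/conf/resources.cfg
# Per-media-session thread limit. Factory default is 6.
System.MCS.Session.<mediasession> = 6
```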

Farm Validator should check cache repo to be shared on Media role servers
The farm validator reports an error if the cache repository on dedicated media servers is not shared. 

Farm validator should check cache repository sharing on Forward role servers
The farm validator has been updated to assure the cache repository is shared with Forward (role) servers. 
 

Server Build 9.0.13

Download PDF

Allow filtering for lists with actions 

Two additional columns are available on the Other Lists page to display the number of actions defined using the filter (# Actions) and the date and time the list was created or last modified (Last modified date). To filter on a specific action type, call up the context menu in the filter area and select Action Types to display a filter element for the action type. From the list, select the applicable action type and apply it.

Web services allow querying of task information 

Web services client applications can use the monitor command to retrieve system monitoring metrics such as the number of tasks in the task queue. 

Web viewer supports a Thumbnail panel 

The web viewer supports a thumbnail panel. The feature is disabled by default. To enable it, select the thumbnail panel tool on the options panel. When enabled, the series view is suppressed, the image area becomes empty image frames and the study’s series are displayed in a single row horizontally across the top of the frame area or vertically along the left side of the frame area. The thumbnail panel includes a study header containing study identification data followed by each series. If multiple studies exist in the web viewer session, they follow in succession in the thumbnail panel. Users can drag the series from the thumbnail panel into available image frames.

Allow document (attachment) upload via web services interface 

The web services library includes a command to upload an attachment to an existing study. See the fileUpload command in the eRAD PACS Web Services Programmer’s Manual for details. Supported file types include JPG, BMP, TIF and PNG. The system responds the same as if the file was uploaded using the GUI-based upload tool.

Improve import device tool 

The import devices tool, importdevices.sh, added two additional options. The update option, -u, overwrites existing device entries with the data in the data file. The duplicate AE option, -a, suppresses the duplicate AE Title warning and imports the device. Note that the Admin must resolve the duplicate entry manually after the import completes.

List size counts on the other lists page avoided the Query Qualifier 

The list size column on the Other Lists page could not be checked by the query qualifier and, as a result, caused expensive queries. To eliminate the bad queries, the column has been removed. To see the number of studies that match a filter, expand the filter’s row to see the list details. Note that if the query fails the qualifier’s criteria and the user does not have Restricted Query permissions, the Item Count field will show N/A rather than the list size.

Include accession number and patient ID in delete notification messages 

The web services delete notification message includes the patient ID and accession number. See the eRAD PACS Web Services Programmer’s Manual for details.

Expand admin capability to track and take action on weakly hashed passwords 

Admins are notified when user accounts with weak password hashes exist. To get an actionable list of affected accounts, add the Weak Password column to the user accounts page and filter for true entries. 

Device-specific relative priority control 

Admins can assign relative priorities to tasks spawned when data is acquired from specific (registered) DICOM devices. The DICOM device’s configuration page includes a Task Relative Priority setting for inbound and outbound tasks. When data is acquired, the resulting tasks are assigned the inbound priority. Manual and action-initiated forwards apply the outbound priority. Note that auto-forwards apply the inbound device’s priority to the forward task, not the outbound device’s priority. 

Check and remove stuck monitor locks 

To avoid stuck locks that happen if services are stopped while the monitor script is running, the monitor script looks for a lock file and if present, checks to see if the process that locked it is still running. If it is not running, the monitor process deletes the lock and continues executing.

Resent report notification messages to the RIS sent in the wrong order 

Reports and addenda resent to the RIS from the technologist view page are queued in creation order. If the send fails for one object and goes to retry, the reports can arrive out of order. An option exists on the device’s outbound messages configuration page labeled Send all reports together. When selected, all report components are sent in a single notification message. For details, see the new notification message, AllReportsNotification, in the eRAD PACS Web Services Programmer’s Manual.

Handle all Secondary Capture Images similarly when calculating min SOP UID

When no modality-specific objects exist, all secondary capture objects, including single frame and multi-frame objects, are considered in the selection of the minimum SOP instance. 

Randomize and log retries of Java side exceptions

When the system encounters an SQL transaction rollback exception, the event is logged, the retry count is bumped up to 20 and the sleep time is randomized to avoid collisions.
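The retry strategy described above can be sketched as follows. This is an illustrative model under stated assumptions, not the server's actual code: the function name and back-off formula are hypothetical, but the pattern matches the description, with up to 20 attempts and a randomized sleep to prevent competing transactions from colliding again on the same schedule.

```python
import random
import time

MAX_RETRIES = 20  # retry count described in the release note

def retry_on_rollback(operation, base_sleep=0.1, max_retries=MAX_RETRIES):
    """Retry an operation that may fail with a transaction rollback.

    Sleeping for a randomized interval desynchronizes competing
    transactions so their next attempts are unlikely to collide.
    """
    for attempt in range(1, max_retries + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_retries:
                raise  # give up after the final attempt
            # Randomized back-off grows with the attempt number.
            time.sleep(random.uniform(0, base_sleep * attempt))
```

In the real system the exception type would be the specific SQL rollback exception rather than a bare Exception, and each retry would also be logged.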

Server Build 9.0.12

UPGRADE NOTICE: Upgrades affect existing data. See details below.

REVERSIBILITY NOTICE: Some changes require review if uninstalling this build.

The Web Service interface should be able to check if study resides on multiple mounts

Response messages to a study query request contain a field indicating whether the study resides on a single mount or multiple mounts. Details are available in the eRAD PACS Web Services Programmer’s Manual.

Forward jobs should be controlled on a server farm

REVERSIBILITY NOTICE: If uninstalled, the Forward role assignment, if configured, must be manually removed.

A Forward role has been introduced. The server assigned the Forward role is responsible for all forward tasks, regardless of the source of the request, including manual forwards, auto forwards, forward actions, device auto forwards, etc., with the exception of forwards initiated in response to a DICOM retrieve request. Only one farm server can be assigned the Forward role. By default, the role is assigned to the Application server.

Provide way to "down" a farm server

Admins can down a farm’s registration or stream server from the Admin/Devices/Farm page. A server can be downed indefinitely or for a defined period of time. Downed servers remain active, but the load balancer does not direct traffic to them.

Allow a 3rd party web services user to download only IQ images

A viewer client can instruct the server to return the low-resolution initial quality images rather than full-fidelity images using the QA token in the open command. Details are provided in the viewer interface developer manual. Optionally, the session can be configured to always return the initial quality image by setting the SESSIONLSY field in the session table to ‘1’.

Add the ability to get DICOM standard PS for viewer created PS objects

A new web services command, Get PS Object, exists to convert eRAD PACS’s presentation state data to DICOM-conformant objects and download them. Details are available in the eRAD PACS Web Services Programmer’s Manual. Additionally, a command line tool, convPS, is available to perform the same conversion.

Handling MySQL and JDBC retries - Java side

The remaining direct SQL queries have been converted to use SmartPreparedStatement and these queries have been optimized for reuse.

Selectable name filter format

The admin has the ability to configure the person name filter. The feature is configurable in the field label configuration page. The default person name filter is defined by the Use person name filter setting on the Admin/Server settings/Data formats page. It defaults to Simple, meaning a two-value (first + last) name. When configured to None, names are free text fields. The value Full uses the five-field DICOM-compliant name format. Name format settings can be assigned to individual name fields from the Customize Label configuration page.

RepositoryHandler should handle many mounts more effectively

DEPENDENCY NOTICE: To apply the optimization when running in takeover mode, the origin server must be running 7.2 medley-102 or later.

This is the wormhole-based solution needed to eliminate unnecessary searches across multiple mount points when looking for the location of a study. It suppresses study and order creation messages at the repository handler level, and when moving data, the mount location is included in the wormhole message so the replica server doesn’t need to search through all mounts.

Enhance mutexing of get/add/delete notes

All notes were managed through a single locking mechanism, even though notes belong to a single study. To remove delays adding, deleting and retrieving notes in the patient folder, each study manages its own note-locking mechanism.

Evaluate and robust V9 legacy double-leg/dirty studies tools/feature

When a repository is configured to use Authoritative mode, dirty and broken (i.e., multihub) studies cannot be cleaned up automatically. These studies are marked as dirty, so they can be identified and cleaned up manually, and the original study folder is returned to the caller.

Optimize deleteTasksForStudy in the study cleanup

The efficiency of the study cleanup tool was improved by filtering existing tasks using the study’s UID, rather than checking all tasks for related ones.

Multi-hub cleanup should not forward obsoletes to DR by default

The built-in default for propagating deletes when using the multi-hub cleanup tool has been changed to go to dotcom servers but not archives. The setting, Propagate delete from source hub to, can be changed on the Server Settings/ Multi-hub Cleanup page.

Increasing log rotation time

The log rotation time for the info.log and error.log files has changed to 30 days. The setting, LogRotateDays, is configurable in ~/var/conf/self.rec. Note that the logs subdirectory named “week” remains unchanged, even if the rotation time is not seven days.
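Expressed as a configuration fragment using the file and setting named above:

```
# ~/var/conf/self.rec
# Rotation period, in days, for info.log and error.log.
LogRotateDays=30
```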

Series level "delete immediate"

Users can delete series throughout the dotcom and allow it to be resent to the server (i.e., employ the “delete immediate” feature) when the Delete Mode setting on the Server Settings/Study Operations page is set to Delete Immediately.

Add Callback Hook Functionality to v9

The callback hook functionality has been restored. Settings are available on the Server Settings/Register Callback Hooks page. Administrators can define a URL to call when a matching study log event occurs.

Partial data takeover should handle actions automatically

When instantiating a new dotcom server, use the ~/var/conf/actionConvert.conf file to define the IP addresses of the servers whose actions are to be copied over. This allows exported users to retain their configured actions after the import. IPs and IDs not listed in the configuration file will retain their original serverid value and be disabled.

Object level "delete immediate"

Users can delete a select image throughout the dotcom and allow it to be resent to the server (i.e., employ the “delete immediate” feature) by checking the “Immediate” box in the confirmation panel displayed when deleting an image from the Technologist View page.

Review declaration of MEM resource for tasks

The system memory resource setting has been removed from Dcreg tasks and added to MCS prepareObject tasks and to tasks for multi-frame ultrasound objects.

Launch diagnostic web viewer from worklist

A new worklist tool is available to launch the in-process web viewer. This tool is available only when the web viewer package is installed on the server. This feature is intended for testing purposes only and will be purged when the web viewer testing is complete.

Enhance logging of blob creation and finalization

Additional logging has been added to record details associated with blob creation.

Serve attachments via servlet

Presenting attachments stored as PDF files, particularly in the patient folder, could fail because the software expected the file’s extension to match the code page URL. A servlet has been added to support PDF files appropriately.

Server Build 9.0.11

Update tech view page to use web client SDK

The tech view page has been updated to use the web client SDK, which transfers frame data using streaming protocols rather than HTTP protocols.

WS operation to return key image information

When key images exist in a study, details are included in the ReportData section of the GetStudyDataResponseMsg, including a URL to return a rendered key image. For details, see the eRAD PACS Web Services Programmer’s Manual.

Store key images in processed repository

Key images are stored in the processed repository indefinitely to avoid the need to reheat the entire study when they are requested by a client application or displayed in a report.

Monitoring should send number data only, draw graph on the browser

The monitor page has been updated to receive the data from the server and draw the graph/chart in the browser. When a graph/chart is being generated, a progress bar appears at the top of the data area.

Load canned report templates on demand

When initiating the report page for the viewer or browser page, the server collects and submits the list of report templates and not all the template files. The full report template is downloaded on-demand when loading the report page.

Worklist csv export download notification

List downloads, including those from the worklist table, accounts table, log table, etc., are generated asynchronously and the user is notified when the data is available for download. The status panel with the download button appears next to the session menu.

Check viewer version from activity sign

The viewer submits its version number to the server in each keep-alive message. The server compares it to the user’s configured viewer version and notifies the viewer if they are different.

Latest viewer version returns incorrect viewer

Viewer versions listed on the user’s profile setting page have been sorted by version number so the viewer can identify the latest build.

Media export Information

The media export status panel displays patient and study identifiers for each export job.

Grant users with Admin rights access to study remove tool

The Study Cleanup tool used to fix studies that exist on multiple hub servers has been made available to users with Admin rights.

Add memory management to web viewer

The web viewer manages its memory usage to prevent it from consuming more memory than it needs or exhausting the memory available. When it loads data that exceeds its maximum (512MB), it starts releasing memory. The affected images will be redownloaded when necessary.

Web viewer should only render when the view changes

The web viewer refreshes images at a fixed rate, but in certain environments, specifically Citrix, where processing is performed by the CPU rather than the GPU, this could burden the CPU with unnecessary activity. The viewer has been modified to refresh images only when an animation or mouse event occurs.

Log and monitor reheats in a standard operation log entry

User requests to reheat a study are logged in the operation log file (and database). Additionally, reheat log entries are available to the monitoring tools and can be displayed on the system monitor page (when it includes registration servers).

Protect report's rich text content from pasting bad content

When users paste text containing iframe and script data, it can be misinterpreted. This information is stripped from data pasted into the report panel.

Framework to pass task context to other farm servers in intracom calls

The intracom framework has been updated to include the task context in its calls so when the target server is the calling server, the task can be grouped with related tasks.

Taskd should handle situations when MySQL is not working

If the database is not responding, tasks from the retry queue could go into an orphaned state and never complete. Now, these task threads go to sleep so they can be retried once the database service is restored.

Make ObjectForward tasks collapsible

When a user repeatedly issues a forward request consisting of the same data, the requests are collapsed into a single task and executed only once.

Add debug info to troubleshoot studies remaining in cooked state

Additional logging has been added to monitor cooking and reheating activities, including active study object dumps.

Allow/Ignore DICOM Q/R attributes included in a request below the Query Level

When a DICOM C-FIND request includes series-level attributes in a study-level search, or image-level attributes in a study- or series-level search, as defined by the Query Level attribute, the server ignores them rather than reporting an invalid C-FIND request.

Server Build 9.0.10

Handling MySQL and JDBC retries

The remaining direct SQL queries have been converted to use SmartPreparedStatement and these queries have been optimized for reuse.

Server farm uses a single license

All servers in a server farm reference a single license hosted on the application server. If a server cannot access the application server or no application server is explicitly defined, the software will not run. Additional licensing errors and warnings are available in the license generation manual.

Support cache configuration where no moves ever happen

When the cache repository is clearable, there’s a configuration option, isClearableMmove, to enable and disable moving data. When “false” (default), moves will not happen during checkOverload. The data is deleted instead. Details are available in the repository handler manual.

Change herelo(d) to use dcmtk's openjpeg j2k implementation instead of jasper

The compression used by herelo and herelod has been changed to use the platform’s instance of openjpeg.

Make Monitoring Disk utilization dynamic

Disk monitoring tools can monitor the usage of all drives and partitions on which system data resides. The options on the monitoring GUI are defined using the mount’s label.

StudyRepositoryWrapper shall only manage repos that it is configured to manage

To avoid unnecessary overhead, local, reliable mounts are managed with the raw repository manager instead of the advanced repository manager (StudyRepositoryWrapper). When the raw mode is used, a runtime warning is logged.

Make hyphen available in userID

The hyphen character is accepted as a supported character for user, group, LDAP and document type IDs.

Add "running tasks" option to qst.sh

The queue status tool, qst.sh, includes an option, running, to display information on the running tasks.
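As a sketch (the option name comes from this note; the working directory and exact invocation syntax are assumptions):

```shell
# Show information on the currently running tasks
./qst.sh running
```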

Completed orders shall be openable from the GUI

Completed orders can be opened from the worklist into the main and web viewers.

There should be a MEM resource for Tasks

An independent memory resource is available to restrict the number of concurrent memory-consuming tasks, including ObjectForward, Dcreg and DcCompressDataTask. It applies to task objects whose names begin with “BT”. The setting, System.MEM, is in component/taskd/resources.cfg. The default value is three.
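A minimal sketch of the setting (the parameter name and default come from this note; the key=value syntax is an assumption):

```shell
# component/taskd/resources.cfg -- limit concurrent memory-heavy tasks
System.MEM=3
```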

Origin-Replica migration tool: destination

A web service, ImportStudies, exists to register data received from an Origin server. The service creates the study and report database records, creates object or meta database records, updates reference counters, indexes the repository location in the database, and logs the activity.

Allow retrieval to continue despite local copy

Manual retrieve requests and the retrieve action configuration page include an option to override the system’s check for a local copy. The setting is disabled by default. If selected, the entire study is retrieved, overriding the local files, if present.

Extend streamserver logging capabilities

Stream logging has been enhanced to include information used to establish a thread’s affected connection/session. Additionally, the stream logging level can be configured. The logging level, LogLevel, is defined in the stream server configuration file, ~/var/conf/streamserver.conf.
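A sketch of the configuration (the LogLevel name and file path come from this note; the value shown and the key=value syntax are assumptions):

```shell
# ~/var/conf/streamserver.conf -- raise the stream server logging level
LogLevel=debug
```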

Cross-version device export/import tool

Exported device data can be imported by systems running the same or future versions.

Manage dirty repositories in the wormhole synchronization

The Origin server notifies the Replica server of dirty studies (e.g., broken studies existing across multiple hub servers) so the Replica can identify and acknowledge them accordingly.

When monitoring, distinguish critical tasks in the total

The task queue size information displayed on the monitor page separates critical tasks (those with a priority less than or equal to 200) from non-critical tasks. The new field labels are Task queue critical (retry, scheduled) and Task queue failed critical.

Replace strikethrough format for "out-of-fixed-list" values

The notation used to indicate that an enumerated field’s value is not in the defined list has changed from red text with a strikeout to orange text underlined by a dashed line.

Move task management related logs out of info.log

Task management log entries have been moved from the info log to the taskd log file, var/log/javataskd.log.

Retire old monitor.jsp

The direct-access URL to the monitors page has been retired. Access the monitors page from the GUI, Admin/Server Information/Monitor.

Tasks page should be able to show number of "critical" tasks

The Tasks page includes the total number of tasks per queue and, in parentheses, the number of those assigned a critical priority value (i.e., less than or equal to 200).

Further improvements for the reheatStudy script

The study reheating script, reheatStudies.sh, includes an option, -c, to include cold studies in the reheat operation.
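As a usage sketch (the -c option comes from this note, and the script path appears elsewhere in these notes):

```shell
# Reheat studies, including cold ones
~/cases/reheatStudies.sh -c
```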

Enable User layout calculated fields

Admins can create calculated fields for use on the user account page.

Add new factory default calculated field Last seen to User Accounts page

A built-in calculated field is available for the user account table to display the date and time the account was last used to access the system. The field is labeled Last seen.

Rescheduling script does not consider unprocessed when rescheduling

The script to reschedule suspended tasks, rescheduleSuspendedTasksDbase.sh, now considers unprocessed task files and unprocessed database tasks when reinstating suspended tasks.

Unage should prep (cook) prior studies

When the unage process is applied to a study, the system prepares it for use by generating the necessary cache and processed data.

Add index on MsgBroadcastToUsers.USERID

A database index has been added to the USERID field because the field is included in numerous database queries.

Change folder filtering on the worklist

Filtering out studies that belong to no folder is handled as a special case. In other cases, the folder field index is not used, because very few studies belong to folders.

Make memory assigned to Taskd java vm configurable

The memory available to the taskd Java VM is configurable. The parameter is TASKDJAVA_MAXMEMORY in the file ~/var/conf/taskdjava.cfg. The default value is 3GB.
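A sketch of the setting (the parameter name, file path and 3GB default come from this note; the value format is an assumption):

```shell
# ~/var/conf/taskdjava.cfg -- raise the taskd Java VM memory above the default
TASKDJAVA_MAXMEMORY=4096m
```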

Cache access list contents.

User and group access lists are included in numerous database queries (checking account permissions before executing a request), but they rarely change. Access list and compound list sub-list contents are now cached for five seconds, providing quick access with little or no impact on the database.

Minimize access to tier3 storage on study open

Tier 3 storage has few performance requirements and can therefore be slow. The stream server accessed data on tier 3 storage (e.g., meta data, reports) when initiating a streaming request. This access delayed the start of data streaming. The system now checks to see if this data exists in the cache, which is typically fast storage, before accessing it on the tier 3 device.

JIT reheats of a Cold/Frozen study shall occur at higher priority

Reheat requests resulting from a user’s requests to open a cold or frozen study jump to the top of the reheating (processing) queue and employ all available farm servers.

Distributing series of viewer certificates

Updated code signing certificates are stored on the server and made available to the viewer when requested.

Extend taskd with study activity tracking functionality

The time a queued task was last modified is tracked in the task database and available to system components.

Checkoverload logs number of resource folders found while searching for the oldest ones

The number of resource files checked when running checkoverload is included in the file info.log.

Checkoverload times logging

To establish a performance benchmark, the check overload process creates a time-stamped log entry in info.log after every 1,000 hashed folders have been checked.

Improve Origin-Replica data takeover at the Replica

Some improvements to increase takeover performance include reusing the database connection and prepared statements, using a larger buffer with buffered writing, and adding benchmark log entries.

Reuse native DB connection during data takeover

To avoid the unnecessary overhead of creating new database connections, the existing database connection is passed to the repository handler when creating storestate.rec files in the meta repository.

Improve data takeover by processing import studies in background threads at the Replica

Imported studies from an Origin server are processed in multiple threads. The number of threads is defined by NumThreads in ~/shared/var/wormhole/import.cfg. The default is four threads per Origin (hub) server. There’s also a configurable setting to use a common database connection, UseCommonConnection. The default is true.
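A sketch of the import configuration (the parameter names, file path and defaults come from this note; the key=value syntax is an assumption):

```shell
# ~/shared/var/wormhole/import.cfg
NumThreads=4              # import threads per Origin (hub) server
UseCommonConnection=true  # share a common database connection
```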

Exception in boost::filesystem results in JVM crash

An anomaly in the boost library (when run on Rocky) failed to report an exception when the software was unable to access a meta directory, resulting in Tomcat crashes. The equivalent system library call is used instead.

RepositoryDataResolver should not try to resolve data repository when server is a Replica

The replica server is told where the data resides (by the origin server) and therefore should not attempt to resolve the data repository.

Do not attempt auto-correction when in Replica mode

While in data takeover mode, auto-correction is performed by the origin server. The late correction cron job has been disabled on a replica server.

Review role of reheat and reindex for various study states

A full reindex is initiated on a frozen study only when the study and object tables and the files in the data directory are not in sync. In other cases, such as when new objects arrive, a reheat is performed.

Optimize object table mapping

To optimize the performance of loading data from a data file (blob) into the database, it is performed in a single request using a prepared statement.

Add just-in-time study reheat function to frame to support legacy portal

When requesting thumbnail images (i.e., receiving frame requests) for a cold or frozen study, a reheat is triggered automatically. The request will be blocked until the reheat is done and images are available. If cooking takes longer than the request timeout, or if the blob is incomplete at the time the request is issued, no images will be returned.

Provide interface for streamserver selection via app server

A web services command, getStreamServer(), is available to determine which stream server a client application should connect to in order to download image data. Refer to the eRAD PACS Web Services Programmer’s Manual for details.

Server Build 9.0.9

Download PDF

Add usage metrics information to the streamer's logs

The stream server log, ~/var/log/streaminfo.log, includes transmission and reception metrics used by the monitoring tools to track streaming statistics.

Add monitor.jsp tool to Server Information page

The monitoring tools are available from the Application server’s Admin/Server Information page to users with Admin or Support rights.

Support Origin/Replica mode

Added support for Origin and Replica modes. An Origin server shares the state of the data/dicom repository(s) with a Replica server using proprietary notifications instead of formal communication mechanisms (such as DICOM forwards). Notifications are available to share a study’s storage location, indicate objects are created and deleted, and convey data repository activity. Includes support for new Origin and Replica device types. Use the device’s Ping command to verify the Origin or Replica device is available and configured to recognize the server. To enable Origin mode, set the Origin attribute in ~/var/conf/self.rec and restart medsrv. To enable Replica mode, set the Replica attribute in ~/var/conf/self.rec and restart medsrv.
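A sketch of enabling the modes (the attribute names, file path and restart requirement come from this note; the value syntax is an assumption):

```shell
# ~/var/conf/self.rec -- on the Origin server:
Origin=true
# ~/var/conf/self.rec -- on the Replica server:
Replica=true
# In either case, restart medsrv afterward.
```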

Support Origin/Replica mode for the repo handler

Added repository handler support for Origin and Replica modes where the core repository handler notifies the Replica’s core repository handler when changes are made.

Support Origin/Replica mode for dicom object acquisition/update/delete

Forward requests initiated on an Origin device by any activity (acquisition, edit or deletion, whether performed by the user or the system) now send the Replica device a notification identifying the shared DICOM repository on which the data resides, instead of initiating a DICOM forward. Additionally, an Origin device can accept and process edit and deletion updates sent from the Replica device. In most cases, the Origin device performs the operation and then notifies the Replica device to apply the same change.

Add repository management activity to the monitoring module

The monitoring tools include displaying cache, data, processed and other repository management activity (moves, deletes).

Add mysql connection saturation to monitored data

The monitoring tools include displaying the percentage of MySQL connections used.

Eliminate jdbc warning when invoking java from the command line

The reheatStudies.sh script no longer reports warnings about deprecated driver classes.

Farm validator enhancements

Some enhancements have been made to the farm validator, including mounts on shared repositories are checked as well as the repository root; cache, data, meta, tempdata, processed, user and shared folders are checked on applicable farm servers only; failure messages are reported in the application server’s rc.log file; data repository sharing is not checked on servers running in Replica mode; and shared folder error messages indicate this detail rather than using a generic description.

Retire Post Process action and create Prepare Study action

UPGRADE NOTICE: Existing Post Process actions, if any, are disabled when action.jsp runs. Since prefetching has been retired, the Post Process action is unnecessary and has been retired as well. A new action, called Prepare Study, is available to reheat a study. The applicable action settings are on the Prepare Study configuration page, available from the Other Lists table. When the action runs, details are logged in the info log.

Manage "dead mounts" more robustly against temporary database issues

When the database is unavailable, the system no longer assumes the mount is inaccessible and does not start creating dead folders.

Study status should not be cooked if tasks are queued on another registration server

The Cooked status applies when all objects acquired on any registration server have been processed. When a single registration server completes processing and detects another registration server has unprocessed objects, the study’s processing state is set to Partial.

Optimize db access in repohandler

The repository handler uses the existing database access facilities to avoid creating and closing new connections to the database every time the check overload function checks the repository handler’s dirty state.

Origin default user cannot be set to mandatory

When defining an Origin or Replica device, the Service User setting is required.

Webservice to provide study keys for third party clients

A token is used to indicate credentials have been verified for access to specific studies. Web services clients can request these keys using the StudyQuery command. Details are available in the eRAD PACS Web Services Programmer’s Manual.

Webservice queries to support compound queries

The web services interface includes a StudyQuery command that supports compound queries, allowing an OR within the column field and an AND across the columns. For example, select all studies with a patient ID of X and a study date of Y. Details are available in the eRAD PACS Web Services Programmer’s Manual.

Webservice call to query studies with priors

The web services interface has a StudyQuery command that includes an option to return a study and its priors, including those that don’t match the access restriction. The priors are uniquely identified in groups encoded in the results. Details are available in the eRAD PACS Web Services Programmer’s Manual.

Webservice call to provide study location on the shared archive

The web services interface can return the location of a study on the data repository. The StudyQuery command can include an option to return the location on the storage repository. Details are available in the eRAD PACS Web Services Programmer’s Manual.

Web viewer open should trigger reheat on cold studies

When the web viewer requests images for an unprocessed study, it generates them on-the-fly by initiating a processing event. As the images become available, they are streamed to the web viewer and displayed.

Server Build 9.0.8

Download PDF

Add object table cache handling mechanism

REVERSIBILITY NOTICE: If uninstalling, object table entries purged by this feature must be reloaded beforehand by invoking the touch scripts manually. To prevent the object table from growing indefinitely and storing large amounts of unused data, the system purges the least used records. When object data is needed, the system restores it on the fly from the study’s meta data (i.e., the blob). The time period data remains in the object table is defined by ObjectCacheTimeout in ~/var/conf/self.rec. The default is 10 days. Checking for and purging expired data occurs hourly from a cron job. The script is CleanupObjectTable. It can be invoked manually from the command line, if necessary.
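A sketch of the retention setting (the parameter name, file path and 10-day default come from this note; the key=value syntax and units notation are assumptions):

```shell
# ~/var/conf/self.rec -- days an unused object table record is retained
ObjectCacheTimeout=10
```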

Function to safely remove deleted study

This class of core functions enables support for safely removing deleted studies from the system via the GUI. They remove all remnants of a study from a server farm, including the repository resources, database records, task files, reference counters, locks and temporary files.

Create self-contained cw3/4 web rendering module

A JavaScript SDK has been developed and deployed to enable web clients (browsers) to download and render cw3 and cw4 image files. Toolkit details are available in the eRAD PACS Web Client Image Library manual.

Cleanup deleted and bad-state studies

The Study Cleanup page includes the list of study records in a deleted state and a tool to remove them. The expanded row lists the related studies. Log entries exist for each study removed from this GUI feature.

Raise mysql connection limit

The built-in MySQL connection limit has been raised from 150 to 600. Additionally, the connection pool size has been increased to 32. See HPS-445 for subsequent adjustments.

Viewer-compatible localization of the user profile manager

The viewer configuration setting labels on the copy settings page use the customized resource labels employed by the viewer, making the labels consistent between the viewer and web page.

Build web client SDK as part of epserver

The web client SDK is compiled and packaged as part of the epserver build process.

Increase list filter expression database field length

REVERSIBILITY NOTICE: Filters exceeding 2048 characters created after installing this change will be truncated if it is uninstalled, generating unintended results. In previous versions, worklist and other list page filters were stored in files, permitting filter parameters of unlimited length. Since moving filters to the database, a filter length limit is imposed. This length has been increased to 32K. Attempts to save longer filters from the GUI results in a warning. Attempts to import longer filters during an upgrade will result in truncation and invalid results.

Possible repository handler caching issues

When accessing multiple image objects from the same study, the repository wrapper intended to efficiently manage repository access was mired in overhead (locking, database access, etc.) before it hit the cache manager. This was resolved by moving the cache management before the wrapper.

Convert invalid user preference value to default and save

Some user preference settings, including worklist poll time and web viewer dynamic help labels, were not converted to current values when the system was upgraded from v7. When detected, these settings are now converted to the system default value and saved automatically. Log entries exist indicating the system made these changes.

Additional performance info needed to benchmark reheat image tasks

Additional performance metrics have been added to analyze system performance when reheating studies.

Over-locking impacts performance when processing the same study on multiple threads

When multiple threads process the same object at the same time, a race condition could negatively impact performance because cache locking was applied at a global level when it could, and should, be localized.

Improve reheatStudies script

Some enhancements to the reheat script have been applied, including better completion handling, using environment variables when available, cleaning up cache before starting the reheat process, and using relative priority assignments.

Server Build 9.0.7

Download PDF

Tool to validate the state and connectivity of all servers

The script ~/component/tools/validateFarm.sh is available to check the state and configuration of all farm servers. This script should be run on the application server. The tool is available from the GUI (Admin/Devices/Farm page) to users with Admin rights. The output lists detected errors, misconfigurations and invalid states. The output differs when medsrv (specifically, the hypervisor service) is running on all servers versus when it is not. See the Jira issue for which checks are performed based on the running state.
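As a usage sketch (the script path comes from this note):

```shell
# Run on the application server; checks the state and configuration
# of all farm servers and lists errors, misconfigurations and invalid states
~/component/tools/validateFarm.sh
```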

System initiated (automatic) series forward

Device-specific outbound coercion rules are available to filter series and objects when forwarding objects to registered DICOM devices. The feature uses the PROCESS control variable to indicate when to stop processing a specific object. When the variable evaluates to NULL(), the (forward) request for the affected object stops. Skipped objects are identified in oper_info and oper_error log entries. Outbound coercion rules are applied to objects after soft edit changes from PbR objects have been applied. GUI-accessible configuration panels are available on the Devices pages. Preceding and trailing outbound rules applicable for all devices can be configured on the Admin/Devices page. Device-specific outbound rules can be configured on a device’s Edit page. These coercion rules do not apply to forwards initiated in response to a DICOM Retrieve (C-MOVE) request. For instructions using the PROCESS control variable and defining coercion rules, refer to the eRAD PACS Data Coercion manual.

Create a support tool to reprocess all or select studies with keeping the original LRU queue

The script ~/cases/reheatStudies.sh is available to reprocess (reheat) all studies whose cached data files have a ReceivedDateTime before a defined date and time. The output lists all studies in the cache repository and whether each was processed or skipped.

Make list DB conversion more robust

Additional checks have been added to assure v7 user accounts are converted into v9 user accounts. This feature also permits applying the conversion process to previously converted accounts, if necessary: remove the user account from the database, and the account files will be reprocessed when the user next logs in.

Create script consumable output for hdclient printroles

The hdclient tool has a new argument, -s, that creates output in a computer-readable format.
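As a usage sketch (the -s option and the printroles subcommand come from this note; the argument order is an assumption):

```shell
# Print role information in a computer-readable format
hdclient printroles -s
```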

Implement authentication on stream server

To enable the viewer to authenticate a user’s session, the stream server passes it the session ID.

Minimize moves in the repository by not insisting on deleting the oldest resource

Any cached study within a configurable range of time is considered purgeable when performing the scheduled (nightly) cache purging exercise. By default, the configurable range is 5% of the defined time range. Configuring the tool to 0% results in strict adherence to the purge time range, making it backward compatible with previous versions. The setting is deleteOld and resides in ~/repositorypart.cfg in the mount’s root directory.
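A sketch of the setting (the parameter name, file location and 5% default come from this note; the key=value syntax is an assumption):

```shell
# repositorypart.cfg in the mount's root directory
# Percentage of the defined time range within which studies are purgeable;
# 0 restores strict adherence to the purge time range
deleteOld=5
```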

Support sharing files amongst servers in a server farm

UPGRADE NOTICE: This enhancement creates a shared directory with two subdirectories if it has not been created prior to the install. In a server farm, these directories must be shared between all farm servers except the database and load balancer servers prior to starting medsrv. A shared directory, /home/medsrv/shared, must be created on each server, except the database and load balancer, for sharing files between servers in a server farm. The directory requires two sub-directories, ~/tmp and ~/var. Details for creating the new directory are in the Shared storage requirements section of the eRAD PACS Manufacturing Manual.

Store rendering parameters along with images

UPGRADE NOTICE: All cached data needs to be reprocessed to insert additional information into the data files (blobs).

REVERSIBILITY NOTICE: Reprocessed cache files contain additional data that is incompatible with older versions of the software.

Rendering parameters for all clients are stored with the pixel data in the server cache files (blobs). Existing cache data needs to be reprocessed to add these missing details. The new file format is indicated by the .ei4 file extension.

Allow manual override for repository mount's isDedicated flag

UPGRADE NOTICE: To avoid unnecessary space calculations, this new setting should be manually created and set to “true” for any repository whose root and first mount are a single file system. If a repository’s root and first mount are a single file system, the system unnecessarily calculates the size of the repository every night when making space. To avoid this, a configuration setting, forceDedicated, is available in the repository’s repositorypart.cfg file. When set to “true”, the space checking script skips the size calculation for the associated repository.
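A sketch of the setting (the parameter name and file come from this note; the key=value syntax is an assumption):

```shell
# repositorypart.cfg in the repository root -- skip the nightly size
# calculation when the root and first mount are a single file system
forceDedicated=true
```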

Server Build 9.0.6

Handle delete immediate and nuked flags

UPGRADE NOTICE: This feature introduces a new repository called ~/data/tempmeta.repository for storing nuked flags and related files. (See Jira issue for affected files.) It is created during medsrv start. The repository must be shared between all farm servers.

REVERSIBILITY NOTICE: Data in the tempmeta repository is not recognized by previous versions, resulting in invalid data states if downgraded.

Support for deleting studies in a v9 server farm has been completed, including access to the delete and nuked state across multiple registration servers, support for partial deletes from the application server, purging from storage devices (delete immediate requests), and deletes in PbR objects received from external devices.

Detailed task logging should be configurable

A configuration option is available to disable running time calculations in log entries of successful tasks. When INFOLOG_SECONDS exists in ~/var/conf/taskd.conf, running times are suppressed if the task completed successfully within the defined number of seconds. Running times of failed or retried tasks are included in the entries regardless of the configuration setting.
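A sketch of the setting (the parameter name and file path come from this note; the threshold value shown and the key=value syntax are assumptions):

```shell
# ~/var/conf/taskd.conf -- suppress running-time details for tasks
# that complete successfully within this many seconds
INFOLOG_SECONDS=5
```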

Inherited user preferences/settings - GUI configuration

Group and system default settings are configurable from the GUI. The configuration page is accessible from the Preferences section of the Admin/Server Settings/Web Server page. Select the source account and then define one or more target accounts. Assign settings by checking the box in the settings section. Only checked settings will be copied to the target account(s). Use the search field to find a specific setting. (The section will be expanded.) Click Toggle Summary to review the changes to apply. Click Confirm to apply the changes. When finished applying changes, close the panel by clicking the Cancel button in the bottom-right corner.

"Converted invalid worklist polltime value" log message spam

When certain system configuration settings contain an invalid value, the built-in default value is applied, a message is logged in the log file (maximum once per day), the administrator is notified via a message in the GUI messaging tool, and if encountered during startup, a warning is written to stdout.

Automatically manage isShared setting for repositories, Phase I

Since a server knows which repositories are local, the software can manage the sharing setting for them. To prepare for identifying local repositories and configuring them as not shared, the default shared setting for all repositories is set to true, eliminating the need to manually configure each one individually.

Server Build 9.0.5

Support WS API call to prepare (cook) the study

Web services command PrepareStudy() is available to process and cache a study on the PACS system. See details in the eRAD PACS Web Services Programmer’s Manual.

Additional output for user and device import/export tools

UPGRADE NOTICE: The output of the import and export devices tool’s listing option has changed. The device import and export tool list option, -l, dumps the device’s configured DICOM services. The device import tool supports a new command line option, -s, to list the devices configured with workflow triggers (autortv, autofwd, etc.) The user account import tool supports a new command line option (-a) to list the accounts with enabled actions.
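As a usage sketch (the options come from this note; the tool names below are placeholders, since the note names only the options):

```shell
# Placeholder tool names -- substitute the actual import/export scripts
./deviceImportExport.sh -l  # dump each device's configured DICOM services
./deviceImport.sh -s        # list devices with workflow triggers (autortv, autofwd, etc.)
./userImport.sh -a          # list user accounts with enabled actions
```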

SessionException server log is not informative enough

Server error log entries for session exceptions include the cause statement and the stack trace data.

Handling MySQL and JDBC retries - Java side - Part I

Database calls initiated from Java code use thread-local database connections to support retries.

Quick compatible fix for the cw4 compression error

A temporary fix has been applied to gwav4 compression to limit the frequency band traversals to five bands, making it similar to gwav3, which does not exhibit the data overrun condition. Note that affected studies (i.e., those with the overrun condition) must be reprocessed.

Server Build 9.0.4

DEPENDENCY NOTICE: Dependencies exist. See details below.

Separate streamserver component - interface to load balancer

The streamserver component can be assigned by the load balancer.

Port and deploy websocket probing tool

A new tool, ~/component/dcviewer/bin/websockcli, is available for testing the availability of the web socket port. The tool must be invoked using a fully qualified websocket URI.
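As a usage sketch (the tool path comes from this note; the host, port and path in the URI are placeholders):

```shell
# Probe web socket availability; the URI must be fully qualified
~/component/dcviewer/bin/websockcli wss://pacs.example.com:443/socket
```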

Ability to observe all tasks running across the system in central location

The Tasks page on the web (application) server displays tasks for all servers in the server farm. Tasks from the server displaying the page are displayed by default. Tasks from other servers are displayed collapsed and can be expanded by clicking the top line of the server’s section. Independent task filtering is supported.

Global rc start doesn't show when a server doesn't start

When invoking the global rc start command, no output was generated on stdout, making it difficult to see what started and what conditions, if any, exist. Now the tool displays the output from each server included in the global startup. The output is grouped by server.

Disable select batch worklist action tools

When batch-selecting multiple worklist rows, the split study, scan, upload attachments and technologist view tools are all disabled. When batch selecting all worklist rows and when selecting a combination of orders and studies, all three open tools are disabled as well.

Identify cache state on worklist in a WS client

The web services interface supports retrieving the cache repository state of a study. The field, Preparing Status (CPST), is available in GetStudyData and Query responses. For details, see the EP Web Services Programmer’s Manual.

Password field on Password Reset page is limited to 16 characters

The password field on the password reset page imposed a limit that did not exist on other pages. Now all pages permit assigning passwords of unlimited length.

Fill study edit page dropdown lists with distinct database values

Selection lists on the study edit page include values stored in existing study records, as well as the list values defined by the field’s configuration, when the field’s settings (editable from the Customize Labels page) have Limit selection to List Values checked and Is strict enum unchecked.

Add transparent proxy support to haproxy configuration template for DICOM

The proxy server is configured to use transparent proxy mode by default.

Track cache blob changes during viewer session

DEPENDENCY NOTICE: This feature requires viewer-9.0.4 or later.

When the contents of a blob in global cache changes, the viewer gets notified so it can decide whether or not to reload the image data.

Server Build 9.0.3

Include blobtest command line tool part of the deployment

A tool to manipulate blob files, ~/component/imutils/bin/blobtest, is available for use from the command line. Invoke the command with the --help argument for usage information.

Viewer profile checksum

The viewer adds a checksum to the profile when saving it and the server calculates a checksum and assures it matches the submitted checksum before it overwrites the saved profile. When the viewer requests the checksum from the server for validation, the server sends the calculated checksum.

App server should call Reg server to run DCReg

UPGRADE NOTICE: The temporary DICOM storage folder has moved to the repository root. Registration processes initiated by the application server are redirected to the registration server using the intracom service. This feature includes a change to the temporary DICOM storage folder. When the DICOM repository is configured with no mount points, DICOM files are placed in the DICOM repository root folder, ~/data/dicom.repository/tmp (instead of ~/data/tmp). This makes the process consistent with handling repositories with multiple mount points and makes the data created by the application server accessible from the registration server(s).

Disable jit image creation from techview

To avoid unnecessary error messages in the log, jit image processing has been disabled (temporarily) when loading an unprocessed study in the technologist view page.

Support opening non-cached ("uncooked") studies - back end

To notify users that the study they are attempting to display is unprocessed, the server checks the processing status plus the state of scheduled processing tasks. It then provides this state information to the calling entity so the user can be notified of delays caused by the just-in-time processing effort. An additional interface allows the viewer to monitor the number of outstanding processing tasks so it can report the status as processing completes.

GUI to restore viewer profile from backup

Administrators can restore a user’s or group’s viewer profile from the available backups using the Profile Backups page available from the user and group accounts page’s Manage Viewer Profile tool. The admin can create, delete and restore backups created by the system and by users.

Framework to communicate among servers in a server farm

An interface framework (component) has been added to pass commands and jobs to the server performing a role that the local server does not provide, or to balance the load across multiple servers performing the same role. The component is called intracom. It uses port 4651, which can be overridden by INTRACOM_SERVICE_PORT in ~/etc/virthosts.sh. It starts the intracom service, which accepts and services gRPC requests from other servers in the server farm. This service is currently started on application and registration servers.
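The override can be sketched as below. The variable name and file location come from this release note; the port value 4652 is an arbitrary example, and any free port may be substituted.

```shell
# Sketch of an override line for ~/etc/virthosts.sh.
# 4651 is the built-in default; 4652 here is an arbitrary example.
INTRACOM_SERVICE_PORT=4652
export INTRACOM_SERVICE_PORT
```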

Inbound filtering based on coercion rules

Control variables have been added to the (inbound) coercion rule command library. Control variables start with an at sign (@) and use upper-case characters. A single control variable has been introduced: @PROCESS. If the rule assigned to the control variable evaluates as NULL, processing (storing, forwarding, etc.) stops and a log entry is registered indicating this. For all other results, processing continues. Note: at this time, control variables are recognized by pb-scp only. Refer to the eRAD PACS Data Coercion manual for details.

Device-specific selective autoforward (sync) feature

The device auto-forward setting instructs the system to send the device all objects acquired from third-party devices, except objects the device sent itself. Updates to objects are also sent (i.e., objects applicable to the “keep sending updates” setting). The limitation is that new data generated by the system for a study that originated from the configured device is not sent to the device. A feature has been added that instructs the system to auto-forward everything it did before, plus any object created on the system. In this way, presentation states and secondary capture objects created by the user and added to the study are sent to the device from which the study originated, assuring both systems have the same collection of objects at all times. The setting is available as a checkbox labeled Sync in the DICOM services/settings section of the device edit page.

Server Build 9.0.2

Separate stream server component

The stream server component has been modified to run independently of other medsrv components. Stream server devices are assigned streaming sessions in a round-robin fashion. As a result, for a given session ID, the same stream server is presented so the viewer can reuse existing connections, when possible.

Separate ingestion server component

Data ingestion has been separated into a dedicated role and dubbed the Registration server as part of the baseline framework effort.

Design and implement the revised "Processed" storage

Data processing has been overhauled as part of the baseline effort to minimize IOPS by storing data as blobs in single files.

Design and implement the revised "Cache" storage

Data caching has been overhauled as part of the baseline effort to minimize IOPS by storing cache data as blobs in fewer files.

Review and redesign the DB schema

The database has been overhauled as part of the baseline effort to eliminate inefficient and unused fields, store new data such as a study’s processed state and repository location, and support object information that previously existed in the retired object table.

Optimize SQL database access implementation

As part of the overall refactoring, connections to the SQL server persist. The framework caches prepared statements for reuse.

Handle the situation when study resides on multiple mounts in the data repository

This applies the repository handler’s new middle layer for tracking the state of metadata in the repositories and handling data that exists on multiple repositories.

Upgrade poco to the latest stable version

Poco version 1.11.2 is installed.

Avoid blocking for non-responsive network storage

When a networked storage device is unreachable, access requests time out and the device is taken offline so subsequent requests can complete. While offline, access requests to the device are ignored. The system backs off for five minutes at a time, checking the device after each period until it is back online.
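The re-check loop can be illustrated as below. This is a sketch of the behavior, not the shipped implementation: the probe path and the injectable interval are stand-ins, and in the product the interval is five minutes (300 seconds).

```shell
# Illustrative sketch: re-probe an offline storage path at a fixed
# interval until it responds again.
probe_until_online() {
    path=$1
    interval=$2
    # Keep checking until the path is reachable again.
    until [ -e "$path" ]; do
        sleep "$interval"   # 300 seconds in the product
    done
}
```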

Retire obsolete components

Obsolete components have been removed from the code base, including applet, pref, ct and pcre. Some medsrv components have been obsoleted in favor of the platform component, including curl, boost and openssl.

Rewrite Customize Labels page from jsp to GWT

The Customize Labels page used to customize the database has been updated to use GWT and adopt a look and feel similar to other web pages. All existing features remain, including the ability to configure individual settings for most database fields and the ability to create and modify calculated fields. Some minor differences exist as a result of changes to the associated feature, not because of the update to GWT. See the user help pages for details.

Enumerated filters should support free text search

Worklist columns defined as enumerated lists might contain values not present in the configured list of values. A free text field is available in the filter panel so these values can be entered as search criteria.

Drag and drop of multi-value filters

Multi-value fields such as Modality allow filtering on multiple values. Users drag the value into the filter panel. Individual values are separated by backslash characters.
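The separator convention can be demonstrated with a short, portable snippet. The backslash separator comes from this release note; the filter string itself is an invented example.

```shell
# Demo: a multi-value Modality filter uses backslash separators;
# split it into individual values (portable sh).
filter='CT\MR\US'
values=$(printf '%s\n' "$filter" | tr '\\' '\n')
printf '%s\n' "$values"   # prints CT, MR, US on separate lines
```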

Track study process state across the system

A study field, PROCSTAT, has been created to track the process state of the study. States include <empty> (state unknown), frozen (DICOM objects exist but unprocessed and uncached), cold (processed but cache data removed or obsolete), cooking (partially processed) and cooked (fully processed and cached). The value can be displayed on the worklist.
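The state values and their meanings, taken from this release note, can be summarized in a small demo helper (the function itself is illustrative, not part of the product):

```shell
# Map a PROCSTAT value to its meaning, per the release note.
procstat_desc() {
    case "$1" in
        "")       echo "state unknown" ;;
        frozen)   echo "DICOM objects exist but unprocessed and uncached" ;;
        cold)     echo "processed but cache data removed or obsolete" ;;
        cooking)  echo "partially processed" ;;
        cooked)   echo "fully processed and cached" ;;
        *)        echo "unrecognized" ;;
    esac
}
```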

Provide notification/tools to resolve users with weak password hashes

A command line tool, ~/component/tools/checkWeakPasswords.sh, exists to identify and update user accounts using weak password hashes. This tool is added to a cron job to run once per day and if accounts are found a notification message is posted to administrator accounts.
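The audit can also be run manually, as sketched below; the cron job runs the same script daily. The path comes from this release note, and the guard keeps the sketch harmless on machines without the tool deployed.

```shell
# Manually run the weak-password audit.
TOOL=~/component/tools/checkWeakPasswords.sh

if [ -x "$TOOL" ]; then
    "$TOOL"
fi
```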

Other list filter changes are discarded when the user leaves the page

Some list pages, including the Other List page, have been updated to remember the applied filters and sort order, like the Worklist and other pages, so when returning to the page, the previous content appears rather than reloading the default page.

Prohibit nonsensical name and date formats

When configuring name, date and time formats, the system checks for anomalies such as duplication of a field component and rejects the request.

Support for saving and restoring the profile from the viewer

The server supports the viewer’s requests to save and delete a user profile, return the list of saved user profiles, and restore a user profile.

Remove weak passwords when importing user accounts

When importing user accounts from a backup file, the system checks the password hash and removes the weak ones. These users will need to reset their password when logging in. The affected accounts are listed in the import log file.

Add proper Display Name to all Tasks (Sub-job's description on the Admin/Tasks page)

Some task entries on the Tasks page, specifically system tasks on the Sub-jobs page, were missing descriptions or displayed a generic description. These tasks now display a representative description in the Tasks page table.

Create a load balancer component

A load balancer (haproxy) component has been created to launch the load balancer when the system initializes. The load balancer component starts if the server is configured as a load balancer in ~/etc/balancer.role. Default configuration settings exist in the component directory, ~/component/haproxy/config/. Settings can be overwritten by customizing copies of haproxy.cfg.template and syslog.conf.template in ~/var/haproxy/. The haproxy configuration file, haproxy.cfg is created from the template during startup. Proxy log files are stored in ~/var/log/haproxy.log and rotate weekly.
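The customization step can be sketched as below. The template names and directories come from this release note; a scratch directory stands in for the medsrv home so the sketch is self-contained.

```shell
# Sketch: copy the factory haproxy template into ~/var/haproxy/ so the
# copy can be customized; startup builds haproxy.cfg from it.
MEDSRV_HOME=$(mktemp -d)   # stand-in for the real medsrv home
mkdir -p "$MEDSRV_HOME/component/haproxy/config" "$MEDSRV_HOME/var/haproxy"
touch "$MEDSRV_HOME/component/haproxy/config/haproxy.cfg.template"

cp "$MEDSRV_HOME/component/haproxy/config/haproxy.cfg.template" \
   "$MEDSRV_HOME/var/haproxy/"
```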

Introduce global/shared resource locking facilities

Resource locking previously applied to a single server. Now that resources can be accessed by multiple servers at the same time (e.g., from multiple stream servers), locking has been extended across multiple servers.

Generate license for a server that does not run apache

Servers that do not run apache, such as the stream server, database server and load balancing server, do not support GUI-based licensing. Additional instructions are available in the licensing manual for collecting the license request file and installing the license file from the command line.

Add blob fetch support to stream server

UPGRADE NOTICE: Servers using a local (fast) repository need to be configured prior to upgrade. The stream server moves blob data from a remote (slow) repository to a local (fast) repository. If the system is not configured with a local cache repository (~/var/localcache.repository), a link must exist pointing to the remote repository (~/var/cache.repository); in this case the system will not attempt to move the data.
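The pre-upgrade link step can be sketched as below. The repository paths come from this release note; a scratch directory stands in for ~/var so the sketch is self-contained.

```shell
# Sketch: point localcache.repository at the remote cache repository
# when no local (fast) cache repository is configured.
VAR_DIR=$(mktemp -d)   # stand-in for ~/var
mkdir -p "$VAR_DIR/cache.repository"

ln -s "$VAR_DIR/cache.repository" "$VAR_DIR/localcache.repository"
```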

Web services enhancements for MCS - Queue length and position

Web services commands have been added to query the MCS server about a job’s position in the queue, QueuePosition(), and the queue length, QueueLength(). See the eRAD PACS Web Services Programmer’s Manual for details.

Support for a custom log4j configuration file to extend/override factory default settings

Log4j has been updated to version 2.18.0. Groovy script has been updated to version 3.0.12. A custom log4j configuration file, log4j2-custom.xml, exists in ~/var/conf to override select settings from the system configuration file. Refer to the template file, ~/component/classes.com/erad/pacs/log4j2-custom.xml, for customization instructions.

Missing GUI setting for changed state

The Changed State setting has been restored to the Server Settings page.

Start/stop servers in the farm in an appropriate order

A command-line Java tool is available to manually start and stop the hyper+ server farm servers in their proper order, as defined by each server’s role configuration. Options include starting the server farm, stopping the server farm and listing the server groups. Refer to the Jira issue for usage details and startup order dependencies.

New jsp file to load qc output after checking session

Web applications can download the quality control results file, ~/var/quality/qc.html, from a server provided the request comes from a qualified source, meaning a valid eRAD PACS user session ID exists and the account has admin or support rights. The command is cases/showQuality.jsp.

Identify cache state on the PACS worklist

A worklist column, ProcSt, displays the processed (cooked) state of a study’s data, meaning whether it is available for streaming. A worklist tool, Reheat Study, is available to manually start processing a study for streaming.

Create hyperdirector service

The service role functionality used to register a service in a server farm has been separated out and now runs on each server as the hyperdirector service. This service is disabled when all services run on a single server.

Repo management should only be running on the appropriate servers

Each storage repository is managed by a single server. Local cache repositories are managed by respective stream and registration servers. Global repositories, including global cache, data, processed and meta repositories, are managed by the application server.

Run Actions only on the app server

In a hyper+ server farm, Actions are run on the application server only.

Review cronjobs and their relations to servers

All cronjobs have been configured to run on applicable servers based on the server’s role. For the complete list of cronjobs and the servers on which they run, refer to the Jira issue. Use crontab -l after rc start completes to get a list of all cronjobs registered for an individual server.

Herpa streaming

Support has been added allowing the viewer to download herpa data over the streaming channels instead of from the web server.

Limit redundant runs of prepstudy during ingestion/processing

The system checks for running study registration or reprocessing tasks when it initiates the process to prepare the study data for use. If any are found, the preparation task is postponed to avoid repeated processing tasks.

Internal locking of repository handler shall be aware of the repository's shared state

Repositories shared by multiple servers in a hyper+ server farm employ a global locking mechanism managed by the database server. Refer to the isShared setting in the repository handler manual.

Incorporate gwav4 compression

DEPENDENCY NOTICE: This feature requires viewer-9.0.2

The streaming technology has added support for gwav version 4, permitting better initial quality from smaller thumbnail images. The viewer still accepts gwav3 and gwav1, if offered by the server.

Server Build 9.0.1

Download PDF

Minimize cache footprint

All system components, including viewer streaming, web viewer, technologist view, etc., support the single compressed cache data format (cw3). The creation of data in other formats has been terminated.

Track data/processed/cache storage state and manage dead mounts for study data storage

Calls to the repository handler have been replaced with a middle layer that tracks the state of meta data and manages the data accordingly, reporting data location, creating folders, moving data, indicating when folders are inaccessible, etc. The repository handler’s dirty file handling and resolving mechanism remains unchanged. See the updated Repository Handler manual for specific details.

Refactor database access in application code to be database-agnostic

Performance-critical calls to the database have been encapsulated in an abstraction layer so the database is not directly exposed to medsrv. In addition to providing a common interface, it allows the application to maintain persistent connections to the database.

Facilitate runtime server role selection

Servers can be assigned specific roles to play, including stream server, registration server, database server, application server and web server. The setting is defined in ~/etc/.role. If no specific role is defined, all services are performed.

Stream server jit not creating raw files even if the format is explicitly requested

The common stream server code failed to generate raw files when explicitly requested. While this is irrelevant for v9 (because its stream server doesn’t use raw files), the change was made to the common code base, which v9 does use.

Upgrade java to current stable version

Java has been upgraded to java-17-openjdk-17.0.3.0.7. The system uses the platform’s version of Java.

Upgrade apache/tomcat to the latest stable version

Apache has been upgraded to httpd-2.4.37. Tomcat has been upgraded to version 9.0.63. The system uses a custom build of Tomcat but uses the platform’s Apache.

Upgrade mysql to the latest stable version

REVERSIBILITY NOTICE: Once upgraded, the database is modified and no longer compatible with the previous version.

MySQL has been upgraded to version 8.0.26. The system uses the platform’s version of MySQL.

Upgrade DCMTK

The DCMTK library has been updated to version 3.6.7.

Upgrade gwt to the latest stable version

GWT has been upgraded to version 2.9.0.

Upgrade openssl to the latest stable version

Openssl has been upgraded to version 1.1.1k. The system uses the platform’s version of Openssl.

Deleting study when study resides on multiple mounts in the data repository

Studies that exist on multiple repositories (which is possible when a repository was not mounted at some point when the data was updated) cannot be deleted via the user interface or by the system. Users are notified of this on the delete review page, and entries are inserted into the log files.

UDI for v9 server

The UDI value for version 9.0 has been updated to 0086699400025590. This value is displayed on the appropriate software identification pages.

Provide a warning sign on the WL for studies that reside on multiple mounts

The Partially Inaccessible column is available to indicate when the study resides on multiple repository mounts. This column is hidden by default. Add it to your layout using the Edit Fields tool.

Forwarding study when study resides on multiple mounts in the data repository

Forwarding a study that resides on multiple mount points will result in an error. If initiated from the GUI, the user is notified. If initiated from a forward action, the request will be retried when the action runs again (in five minutes).

Editing study when study resides on multiple mounts in the data repository

Editing a study that resides on multiple mount points will result in an error. If initiated from the GUI, the user is notified. If initiated from an edit action, the request will be retried when the action runs again (in five minutes).

Editing/adding report and notes when study resides on multiple mounts in the data repository

Editing a report or report notes for a study residing on multiple mount points is not supported. If the condition exists, the report add/edit button and the note add/edit button are disabled in the patient folder.

Remove legacy, unimplemented jsp-s

Java servlet functions retired or no longer in use in version 9 have been removed from the code base.

Web Services notification triggered at child did not get sent

Depending on timing, an auto-correction message originating at a child server could jump ahead of the first object registration message, allowing third-party devices to believe a study exists before it actually does. Auto-correction messages are now suspended until the hub server registers at least one object.

Add Study Update to Web Service Device Message Triggers

Web services devices can be configured to receive an order update notification when the study data has been edited. The trigger is enabled when the Study Update setting in the Order Message Triggers section of the web services device edit page is checked. Update sends a notification on new object acquisition, any edit or object re-acquisition. Reindex sends a notification when a study gets reindexed by an admin or the system.

Time based warning message incorrect

The wording of the notification message indicating the repository handler had to delete data even though the threshold wasn’t crossed has been changed to more accurately reflect the cause of the problem.

Study with invalid time zone offset value displays empty study date

For objects containing a non-compliant time zone offset value, the system ignores the bad data and presents time values as recorded in the object.

Serialize (manage) cw3 thumbnail downloads on tech view page

Downloads of CW3 images to the technologist view page and the web viewer are managed by the client. A maximum of four images are downloaded in parallel to avoid overloading the browser.

Include list name in logs generated by actions

Log entries on the Logs page and in the oper_info log containing details for events resulting from an action, except the Prefetch action, identify the worklist filter that matched the study.

Warn admin when a study is acquired that might nullify the server license

The server’s license is checked against multiple events and data. When one of these is detected but not enough to invalidate the license, the system sends a message notification to administrators. Admins can contact eRAD support for details and ways to avoid a license exception.

Change default media creation engine to local

The media creation engine now defaults to the local MCS. This applies to new installs and upgrades.

Handle lost SQL connections/reconnects from C/C++ more reliably

When the underlying connection to the database is lost, the software transparently reconnects and retries the pending operation.

Remove configuration options for mandatory v9 features

Some features optional prior to version 9 are no longer optional. They are hard configured by default. The settings for these features have been removed from the GUI.

Local cache usage support for registration

The initial registration creates the compressed image files on the local cache repository before adding them to the blob. This requires the creation of a local cache repository (~/var/localcache.repository).

Server Build 9.0.0

Download PDF

Design and implement "Meta" storage repository

DICOM data is stored in a separate (meta) repository from processed data.

Add ability to track actual repo location via callback/event notifications

The repo handler supports a callback interface used to track resource locations without needing to use the locate function.

Web service's ForwardStudy operation should handle partial (series/object) forwards

The web services Forward command supports forwarding individual series and objects from the same study to a defined target. See the web services manual for details.

Update ServerSettingsConst hierarchy to be enum based

Structural changes applied to improve the handling of server settings.

Report templates are not exported/imported

Report templates are now included in the user export and import tools.

Repository handler should do the auto-resolution even if above the fullLimit

The repository handler automatically consolidates studies split between multiple partitions even when the full limit threshold has been exceeded, except when the physical limit has been exceeded. The physical limit is defined by the configuration setting hardFullLimit. The built-in default is 99.9%. This can be overridden in repository.cfg.
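An override can be sketched as below. The key name and the fact that it lives in repository.cfg come from this release note; the key=value form, the 99.5 value, and the use of a file in the working directory are assumptions for illustration only.

```shell
# Sketch: override the built-in 99.9% physical limit.
# key=value syntax and the 99.5 value are assumptions.
echo "hardFullLimit=99.5" >> repository.cfg
```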

Make the rights setting color more visible when using dark mode

The background color of the individual rights fields when using the dark theme has been modified to make the setting indicator more visible.

Return error code by dotcom.ReCollect when recollecting dotcom configuration fails

The command line tool to recollect dotcom information includes options to return an error code when the operation encounters an error or warning.

Back-end script for repo.jsp and validate.jsp needed

The repo.jsp and validate.jsp scripts have been updated to dynamically generate a system session for use in automation tools.

Log import user and user conversion into an upgrade log file

Log entries for importing user accounts and for user conversion (during upgrade) are consolidated into dedicated log files, ~/var/log/UserExport, ~/var/log/UserImport and ~/var/log/UserConversion.

Add "generic title+label" option to report template editor

A generic report template type has been added to support adding Dcstudy fields to a report view or report edit template. See the eRAD Layout XML Customization manual for details.

Change the default of warnmoveTime for the data/dicom.repository

The default for warnMoveTime has changed to five hours for data repositories. For all other repositories, the default remains two days.

Admin GUI feature to review and delete nuked study files

Nuked study files retain study data, which is used to populate a new web page for reviewing and deleting these files. The Study Cleanup page is available to users with Support rights from the Admin menu. The page is empty by default. Enter criteria to display a list of up to 5,000 nuked studies. The tools are consistent with those on the Worklist page. When cleaning studies that exist on child servers, start with the child before cleaning up the parent. Cleanup requests and results are logged in the forever log.

Create viewer profile backup file after editing profile from the desktop viewer

When the user updates their viewer settings, the existing profile file is saved as a backup so it can be restored later, if necessary. These backup files are propagated throughout the dotcom.

Default "apply to current content" to no in v8 action lists

The default for the Apply to Current Content setting for all actions has changed to “No”. Existing actions are not affected as long as they remain enabled. Once disabled, the new default is used when re-enabled, unless manually overridden during setup.

Baseline server code base on v8.0

eRAD PACS version 8 medsrv build 49, asroot 8.0.1 and platform-7.9.0 make up the starting code base for eRAD PACS v9.0. Modifications have been applied to account for labeling (eRAD PACS v9.0) and packaging (RPMs, etc.)