The PACS Server

New Features and Enhancements

Server Build 9.0.13

Allow filtering for lists with actions 

Two additional columns are available on the Other Lists page to display the number of actions defined using the filter (# Actions) and the date and time the list was created or last modified (Last modified date). To filter on a specific action type, call up the context menu in the filter area and select Action Types to display a filter element for the action type. From the list, select the applicable action type and apply it.

Web services allow querying of task information 

Web services client applications can use the monitor command to retrieve system monitoring metrics such as the number of tasks in the task queue. 

Web viewer supports a Thumbnail panel 

The web viewer supports a thumbnail panel. The feature is disabled by default. To enable it, select the thumbnail panel tool on the options panel. When enabled, the series view is suppressed, the image area is replaced with empty image frames, and the study’s series are displayed in a single row horizontally across the top of the frame area or vertically along its left side. The thumbnail panel includes a study header containing study identification data, followed by each series. If multiple studies exist in the web viewer session, they follow in succession in the thumbnail panel. Users can drag series from the thumbnail panel into available image frames.

Allow document (attachment) upload via web services interface 

The web services library includes a command to upload an attachment to an existing study. See the fileUpload command in the eRAD PACS Web Services Programmer’s Manual for details. Supported file types include JPG, BMP, TIF and PNG. The system responds the same as if the file was uploaded using the GUI-based upload tool.

Improve import device tool 

The import devices tool, importdevices.sh, supports two additional options. The update option, -u, overwrites existing device entries with the data in the data file. The duplicate AE option, -a, suppresses the duplicate AE Title warning and imports the device. Note that the Admin must resolve the duplicate entry manually after the import completes.
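
A minimal usage sketch, assuming the tool accepts the export data file as its final argument (the file name below is a placeholder; consult the tool's help output for the authoritative syntax):

  # overwrite existing device entries (-u) and import despite duplicate AE Titles (-a)
  importdevices.sh -u -a devices_export.dat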

List size counts on the other lists page avoided the Query Qualifier 

The list size column on the Other Lists page bypassed the query qualifier and, as a result, could generate expensive queries. To eliminate these queries, the column has been removed. To see the number of studies that match a filter, expand the filter’s row to view the list details. Note that if the query fails the qualifier’s criteria and the user does not have Restricted Query permissions, the Item Count field shows N/A rather than the list size.

Include accession number and patient ID in delete notification messages 

The web services delete notification message includes the patient ID and accession number. See the eRAD PACS Web Services Programmer’s Manual for details.

Expand admin capability to track and take action on weakly hashed passwords 

Admins are notified when user accounts with weak password hashes exist. To get an actionable list of affected accounts, add the Weak Password column to the user accounts page and filter for true entries. 

Device-specific relative priority control 

Admins can assign relative priorities to tasks spawned when data is acquired from specific (registered) DICOM devices. The DICOM device’s configuration page includes a Task Relative Priority setting for inbound and outbound tasks. When data is acquired, the resulting tasks are assigned the inbound priority. Manual and action-initiated forwards apply the outbound priority. Note that auto-forwards apply the inbound device’s priority to the forward task, not the outbound device’s priority. 

Check and remove stuck monitor locks 

To avoid stuck locks that occur if services are stopped while the monitor script is running, the monitor script looks for a lock file and, if present, checks whether the process that locked it is still running. If it is not, the monitor process deletes the lock and continues executing.

Resent report notification messages to the RIS sent in the wrong order 

Reports and addenda resent to the RIS from the technologist view page are queued in creation order. If the send fails for one object and goes to retry, the reports can arrive out of order. An option labeled Send all reports together exists on the device’s outbound messages configuration page. When selected, all report components are sent in a single notification message. See the new notification message, AllReportsNotification, in the eRAD PACS Web Services Programmer’s Manual for details.

Handle all Secondary Capture Images similarly when calculating min SOP UID

When no modality-specific objects exist, all secondary capture objects, including single frame and multi-frame objects, are considered in the selection of the minimum SOP instance. 

Randomize and log retries of Java side exceptions

When the system encounters an SQL transaction rollback exception, the event is logged, the retry count is bumped up to 20 and the sleep time is randomized to avoid collisions.

Server Build 9.0.12

UPGRADE NOTICE: Upgrades affect existing data. See details below.

REVERSIBILITY NOTICE: Some changes require review if uninstalling this build.

The Web Service interface should be able to check if study resides on multiple mounts

Response messages to a study query request contain a field indicating whether the study resides on a single mount or multiple mounts. Details are available in the eRAD PACS Web Services Programmer’s Manual.

Forward jobs should be controlled on a server farm

REVERSIBILITY NOTICE: If uninstalled, the Forward role assignment, if configured, must be manually removed.

A Forward role has been introduced. The server assigned the Forward role is responsible for all forward tasks, regardless of the source of the request, including manual forwards, auto forwards, forward actions, device auto forwards, etc., with the exception of forwards initiated in response to a DICOM retrieve request. Only one farm server can be assigned the Forward role. By default, the role is assigned to the Application server.

Provide way to "down" a farm server

Admins can down a farm’s registration or stream server from the Admin/Devices/Farm page. A server can be downed indefinitely or for a defined period of time. Downed servers remain active, but the load balancer does not direct traffic to them.

Allow a 3rd party web services user to download only IQ images

A viewer client can instruct the server to return the low-resolution initial quality images rather than full-fidelity images using the QA token in the open command. Details are provided in the viewer interface developer manual. Optionally, the session can be configured to always return the initial quality image by setting the SESSIONLSY field in the session table to ‘1’.

Add the ability to get DICOM standard PS for viewer created PS objects

A new web services command, Get PS Object, exists to convert eRAD PACS’s presentation state data to DICOM-conformant objects and download them. Details are available in the eRAD PACS Web Services Programmer’s Manual. Additionally, a command line tool, convPS, is available to perform the same conversion.

Handling MySQL and JDBC retries - Java side

The remaining direct SQL queries have been converted to use SmartPreparedStatement and these queries have been optimized for reuse.

Selectable name filter format

Admins can configure the person name filter format. The feature is configurable on the field label configuration page. The default person name filter is defined by the Use person name filter setting on the Admin/Server settings/Data formats page. It defaults to Simple, meaning a two-value (first + last) name. When configured to None, names are free text fields. The value Full uses the five-field DICOM-compliant name format. Name format settings can be assigned to individual name fields from the Customize Label configuration page.

RepositoryHandler should handle many mounts more effectively

DEPENDENCY NOTICE: To apply the optimization when running in takeover mode, the origin server must be running 7.2 medley-102 or later.

This wormhole-based solution eliminates unnecessary searches across multiple mount points when locating a study. It suppresses study and order creation messages at the repository handler level, and when moving data, the mount location is included in the wormhole message so the replica server does not need to search through all mounts.

Enhance mutexing of get/add/delete notes

All notes were managed through a single locking mechanism, even though notes belong to a single study. To remove delays adding, deleting and retrieving notes in the patient folder, each study manages its own note-locking mechanism.

Evaluate and make robust the V9 legacy double-leg/dirty studies tools/feature

When a repository is configured to use Authoritative mode, dirty and broken (i.e., multihub) studies cannot be cleaned up automatically. These studies are marked as dirty, so they can be identified and cleaned up manually, and the original study folder is returned to the caller.

Optimize deleteTasksForStudy in the study cleanup

The efficiency of the study cleanup tool was improved by filtering existing tasks using the study’s UID, rather than checking all tasks for related ones.

Multi-hub cleanup should not forward obsoletes to DR by default

The built-in default for propagating deletes when using the multi-hub cleanup tool has been changed to go to dotcom servers but not archives. The setting, Propagate delete from source hub to, can be changed on the Server Settings/ Multi-hub Cleanup page.

Increasing log rotation time

The log rotation time for the info.log and error.log files has changed to 30 days. The setting, LogRotateDays, is configurable in ~/var/conf/self.rec. Note that the logs subdirectory named “week” remains unchanged, even if the rotation time is not seven days.
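
To confirm the current rotation window from the command line (the attribute name is documented above; the exact record syntax of self.rec is not, so inspect rather than copy a format):

  # show the configured rotation period, in days, if the attribute is present
  grep LogRotateDays ~/var/conf/self.rec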

Series level "delete immediate"

Users can delete series throughout the dotcom and allow them to be resent to the server (i.e., employ the “delete immediate” feature) when the Delete Mode setting on the Server Settings/Study Operations page is set to Delete Immediately.

Add Callback Hook Functionality to v9

The callback hook functionality has been restored. Settings are available on the Server Settings/Register Callback Hooks page. Administrators can define a URL to call when a matching study log event occurs.

Partial data takeover should handle actions automatically

When instantiating a new dotcom server, use the ~/var/conf/actionConvert.conf file to define the IP addresses of the servers whose actions are to be copied over. This allows exported users to retain their configured actions after the import. IPs and IDs not listed in the configuration file will retain their original serverid value and be disabled.

Object level "delete immediate"

Users can delete a selected image throughout the dotcom and allow it to be resent to the server (i.e., employ the “delete immediate” feature) by checking the “Immediate” box in the confirmation panel displayed when deleting an image from the Technologist View page.

Review declaration of MEM resource for tasks

The system memory resource setting has been removed from Dcreg tasks and added to MCS prepareObject tasks and to tasks for multi-frame ultrasound objects.

Launch diagnostic web viewer from worklist

A new worklist tool is available to launch the in-process web viewer. This tool is available only when the web viewer package is installed on the server. This feature is intended for testing purposes only and will be purged when the web viewer testing is complete.

Enhance logging of blob creation and finalization

Additional logging has been added to record details associated with blob creation.

Serve attachments via servlet

Presenting attachments stored as PDF files, particularly in the patient folder, could fail because the software expected the file’s extension to match the code page URL. A servlet has been added to serve PDF files appropriately.

Server Build 9.0.11

Update tech view page to use web client SDK

The tech view page has been updated to use the web client SDK, which transfers frame data using streaming protocols rather than HTTP protocols.

WS operation to return key image information

When key images exist in a study, details are included in the ReportData section of the GetStudyDataResponseMsg, including a URL to return a rendered key image. For details, see the eRAD PACS Web Services Programmer’s Manual.

Store key images in processed repository

Key images are stored in the processed repository indefinitely to avoid the need to reheat the entire study when they are requested by a client application or displayed in a report.

Monitoring should send number data only, draw graph on the browser

The monitor page has been updated to receive the data from the server and draw the graph/chart in the browser. When a graph/chart is being generated, a progress bar appears at the top of the data area.

Load canned report templates on demand

When initializing the report page for the viewer or browser page, the server collects and submits the list of report templates rather than all the template files. The full report template is downloaded on demand when the report page loads.

Worklist csv export download notification

List downloads, including those from the worklist table, accounts table, log table, etc., are generated asynchronously and the user is notified when the data is available for download. The status panel with the download button appears next to the session menu.

Check viewer version from activity sign

The viewer submits its version number to the server in each keep-alive message. The server compares it to the user’s configured viewer version and notifies the viewer if they are different.

Latest viewer version returns incorrect viewer

Viewer versions listed on the user’s profile setting page have been sorted by version number so the viewer can identify the latest build.

Media export Information

The media export status panel displays patient and study identifiers for each export job.

Grant users with Admin rights access to study remove tool

The Study Cleanup tool used to fix studies that exist on multiple hub servers has been made available to users with Admin rights.

Add memory management to web viewer

The web viewer manages its memory usage to prevent it from consuming more memory than it needs or exhausting the memory available. When it loads data that exceeds its maximum (512MB), it starts releasing memory. The affected images will be redownloaded when necessary.

Web viewer should only render when the view changes

The web viewer refreshed images at a fixed rate. In certain environments, specifically Citrix, where processing is performed by the CPU rather than the GPU, this could burden the CPU with unnecessary activity. The viewer has been modified to refresh images only when an animation or mouse event occurs.

Log and monitor reheats in a standard operation log entry

User requests to reheat a study are logged in the operation log file (and database). Additionally, reheat log entries are available to the monitoring tools and can be displayed on the system monitor page (when it includes registration servers).

Protect report's rich text content from pasting bad content

When users paste text containing iframe and script data, it can be misinterpreted. This information is stripped from data pasted into the report panel.

Framework to pass task context to other farm servers in intracom calls

The intracom framework has been updated to include the task context in its calls so when the target server is the calling server, the task can be grouped with related tasks.

Taskd should handle situations when MySQL is not working

If the database is not responding, tasks from the retry queue could go into an orphaned state and never complete. Now, these task threads go to sleep so they can be retried once the database service is restored.

Make ObjectForward tasks collapsible

When a user repeatedly issues a forward request consisting of the same data, the requests are collapsed into a single task and executed only once.

Add debug info to troubleshoot studies remaining in cooked state

Additional logging has been added to monitor cooking and reheating activities, including active study object dumps.

Allow/Ignore DICOM Q/R attributes included in a request below the Query Level

When a DICOM C-FIND request includes series-level attributes in a study-level search, or image-level attributes in a study- or series-level search, as defined by the Query Level attribute, the server ignores them rather than reporting an invalid C-FIND request.

Server Build 9.0.10

Handling MySQL and JDBC retries

The remaining direct SQL queries have been converted to use SmartPreparedStatement and these queries have been optimized for reuse.

Server farm uses a single license

All servers in a server farm reference a single license hosted on the application server. If a server cannot access the application server or no application server is explicitly defined, the software will not run. Additional licensing errors and warnings are available in the license generation manual.

Support cache configuration where no moves ever happen

When the cache repository is clearable, there’s a configuration option, isClearableMmove, to enable and disable moving data. When “false” (default), moves will not happen during checkOverload. The data is deleted instead. Details are available in the repository handler manual.

Change herelo(d) to use dcmtk's openjpeg j2k implementation instead of jasper

The compression used by herelo and herelod has been changed to use the platform’s instance of openjpeg.

Make Monitoring Disk utilization dynamic

Disk monitoring tools can monitor the usage of all drives and partitions on which system data resides. The options on the monitoring GUI are defined using the mount’s label.

StudyRepositoryWrapper shall only manage repos that it is configured to manage

To avoid unnecessary overhead, local, reliable mounts are managed using the raw repository manager instead of the advanced repository manager (StudyRepositoryWrapper). When raw mode is used, a runtime warning is logged.

Make hyphen available in userID

The hyphen character is accepted as a supported character for user, group, LDAP and document type IDs.

Add "running tasks" option to qst.sh

The queue status tool, qst.sh, includes an option, running, to display information on the running tasks.
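
A hypothetical invocation (only the documented option is shown; check the tool's usage text for additional arguments):

  # display information on the currently running tasks
  qst.sh running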

Completed orders shall be openable from the GUI

Completed orders can be opened from the worklist into the main and web viewers.

There should be a MEM resource for Tasks

An independent memory resource is available to restrict the number of concurrent memory-consuming tasks, including ObjectForward, Dcreg and DcCompressDataTask. It applies to objects starting with “BT”. The setting in component/taskd/resources.cfg is System.MEM. The default value is three.

Origin-Replica migration tool: destination

A web service, ImportStudies, exists to register data received from an Origin server. The service creates the study and report database records, creates object or meta database records, updates reference counters, indexes the repository location in the database, and logs the activity.

Allow retrieval to continue despite local copy

Manual retrieve requests and the retrieve action configuration page include an option to override the system’s check for a local copy. The setting is disabled by default. If selected, the entire study is retrieved, overriding the local files, if present.

Extend streamserver logging capabilities

Stream logging has been enhanced to include information used to establish a thread’s affected connection/session. Additionally, the stream logging level can be configured. The logging level, LogLevel, is defined in the stream server configuration file, ~/var/conf/streamserver.conf.

Cross-version device export/import tool

Exported device data can be imported by systems running the same or future versions.

Manage dirty repositories in the wormhole synchronization

The Origin server notifies the Replica server of dirty studies, e.g., broken studies existing across multiple hub servers, so the Replica can identify and acknowledge them accordingly.

When monitoring, distinguish critical tasks in the total

The task queue size information displayed on the monitor page separates critical tasks (those with a priority less than or equal to 200) from non-critical tasks. The new field labels are Task queue critical (retry, scheduled) and Task queue failed critical.

Replace strikethrough format for "out-of-fixed-list" values

The notation used to indicate that an enumerated field’s value is not in the defined list has changed from red text with a strikeout to orange text underlined with a dashed line.

Move task management related logs out of info.log

Task management log entries have been moved from the info log to the taskd log file, var/log/javataskd.log.

Retire old monitor.jsp

The direct-access URL to the monitors page has been retired. Access the monitors page from the GUI, Admin/Server Information/Monitor.

Tasks page should be able to show number of "critical" tasks

The Tasks page includes the total number of tasks per queue and, in parentheses, the number of these that are assigned a critical priority value (i.e., less than or equal to 200).

Further improvements for the reheatStudy script

The study reheating script, reheatStudies.sh, includes an option, -c, to include cold studies in the reheat operation.
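
A hedged example (per the 9.0.7 notes the script resides under ~/cases/; any other required arguments, such as a cutoff date, are omitted here):

  # reheat studies, including cold studies (-c)
  ~/cases/reheatStudies.sh -c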

Enable User layout calculated fields

Admins can create calculated fields for use on the user account page.

Add new factory default calculated field Last seen to User Accounts page

A built-in calculated field is available for the user account table to display the date and time the account was last used to access the system. The field is labeled Last seen.

Rescheduling script does not consider unprocessed when rescheduling

The script to reschedule suspended tasks, rescheduleSuspendedTasksDbase.sh, now considers unprocessed task files and unprocessed database tasks when reinstating suspended tasks.

Unage should prep (cook) prior studies

When the unage process is applied to a study, the system prepares it for use by generating the necessary cache and processed data.

Add index on MsgBroadcastToUsers.USERID

A database index has been added to the MsgBroadcastToUsers.USERID field because the field is included in numerous database queries.

Change folder filtering on the worklist

Filtering out studies that belong to no folder is treated as a special case. The folder field index is not used in other cases because very few studies belong to folders.

Make memory assigned to Taskd java vm configurable

The memory available to the taskd Java VM is configurable. The parameter is TASKDJAVA_MAXMEMORY in the file ~/var/conf/taskdjava.cfg. The default value is 3GB.

Cache access list contents.

User and group access lists are included in numerous database queries (to check account permissions before executing a request), but they rarely change. Access list and compound list sub-list contents are now cached for five seconds, providing quick access with little or no impact on the database.

Minimize access to tier3 storage on study open

Tier 3 storage has few performance requirements and can therefore be slow. The stream server accessed data on tier 3 storage (e.g., meta data, reports) when initiating a streaming request. This access delayed the start of data streaming. The system now checks to see if this data exists in the cache, which is typically fast storage, before accessing it on the tier 3 device.

JIT reheats of a Cold/Frozen study shall occur at higher priority

Reheat requests resulting from a user’s requests to open a cold or frozen study jump to the top of the reheating (processing) queue and employ all available farm servers.

Distributing series of viewer certificates

Updated code signing certificates are stored on the server and made available to the viewer when requested.

Extend taskd with study activity tracking functionality

The time a queued task was last modified is tracked in the task database and available to system components.

Checkoverload logs number of resource folders found while searching for the oldest ones

The number of resource files checked when running checkoverload is included in the file info.log.

Checkoverload times logging

To establish a performance benchmark, the check overload process creates a time-stamped log entry in info.log after every 1,000 hashed folders have been checked.

Improve Origin-Replica data takeover at the Replica

Some improvements to increase takeover performance include reusing the database connection and prepared statements, using a larger buffer with buffered writing, and adding benchmark log entries.

Reuse native DB connection during data takeover

To avoid the unnecessary overhead of creating new database connections, the existing database connection is passed to the repository handler when creating storestate.rec files in the meta repository.

Improve data takeover by processing import studies in background threads at the Replica

Imported studies from an Origin server are processed in multiple threads. The number of threads is defined by NumThreads in ~/shared/var/wormhole/import.cfg. The default is four threads per Origin (hub) server. There’s also a configurable setting to use a common database connection, UseCommonConnection. The default is true.
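
A sketch of the two documented settings with their default values (the key=value layout is an assumption; compare with the shipped ~/shared/var/wormhole/import.cfg before editing):

  # import.cfg (format assumed)
  NumThreads=4              # worker threads per Origin (hub) server
  UseCommonConnection=true  # share one database connection across the threads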

Exception in boost::filesystem results in JVM crash

An anomaly in the boost library (when run on Rocky) failed to report an exception when the software was unable to access a meta directory, resulting in tomcat crashes. The equivalent call to the system library is used instead.

RepositoryDataResolver should not try to resolve data repository when server is a Replica

The replica server is told where the data resides (by the origin server) and therefore should not attempt to resolve the data repository.

Do not attempt auto-correction when in Replica mode

While in data takeover mode, auto-correction is performed by the origin server. The late correction cron job has been disabled on a replica server.

Review role of reheat and reindex for various study states

A full reindex is initiated on a frozen study only when the study and object table and files in the data directory are not in sync. In other cases, such as when new objects arrive, a reheat is performed.

Optimize object table mapping

To optimize the performance of loading data from a data file (blob) into the database, it is performed in a single request using a prepared statement.

Add just-in-time study reheat function to frame to support legacy portal

When requesting thumbnail images (i.e., receiving frame requests) for a cold or frozen study, a reheat is triggered automatically. The request will be blocked until the reheat is done and images are available. If cooking takes longer than the request timeout, or if the blob is incomplete at the time the request is issued, no images will be returned.

Provide interface for streamserver selection via app server

A web services command, getStreamServer(), is available to determine which stream server a client application should connect to in order to download image data. Refer to the eRAD PACS Web Services Programmer’s Manual for details.

Server Build 9.0.9

Add usage metrics information to the streamer's logs

The stream server log, ~/var/log/streaminfo.log, includes transmission and reception metrics used by the monitoring tools to track streaming statistics.

Add monitor.jsp tool to Server Information page

The monitoring tools are available from the Application server’s Admin/Server Information page to users with Admin or Support rights.

Support Origin/Replica mode

Added support for Origin and Replica modes. An Origin server shares the state of the data/dicom repository(s) with a Replica server using proprietary notifications instead of formal communication mechanisms (such as DICOM forwards). Notifications are available to share a study’s storage location, indicate objects are created and deleted, and convey data repository activity. Includes support for new Origin and Replica device types. Use the device’s Ping command to verify the Origin or Replica device is available and configured to recognize the server. To enable Origin mode, set the Origin attribute in ~/var/conf/self.rec and restart medsrv. To enable Replica mode, set the Replica attribute in ~/var/conf/self.rec and restart medsrv.
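
A rough sketch of verifying which mode, if any, is enabled (the record syntax of self.rec is not documented here, so the attributes are shown only as a search pattern):

  # check for the Origin or Replica attribute; after editing self.rec, restart medsrv
  grep -E 'Origin|Replica' ~/var/conf/self.rec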

Support Origin/Replica mode for the repo handler

Added repository handler support for Origin and Replica modes where the core repository handler notifies the Replica’s core repository handler when changes are made.

Support Origin/Replica mode for dicom object acquisition/update/delete

Forward requests initiated on an Origin device by any activity (acquisition, edit or deletion, performed by the user or the system) now send the Replica device a notification identifying the shared DICOM repository on which the data resides, instead of initiating a DICOM forward. Additionally, an Origin device can accept and process edit and deletion updates sent from the Replica device. In most cases, the Origin device performs the operation and then notifies the Replica device to apply the same change.

Add repository management activity to the monitoring module

The monitoring tools include displaying cache, data, processed and other repository management activity (moves, deletes).

Add mysql connection saturation to monitored data

The monitoring tools include displaying the percentage of MySQL connections used.

Eliminate jdbc warning when invoking java from the command line

The reheatStudies.sh script no longer reports deprecated driver classes warnings.

Farm validator enhancements

Several enhancements have been made to the farm validator: mounts on shared repositories are checked in addition to the repository root; cache, data, meta, tempdata, processed, user and shared folders are checked on applicable farm servers only; failure messages are reported in the application server’s rc.log file; data repository sharing is not checked on servers running in Replica mode; and shared folder error messages indicate this detail rather than using a generic description.

Retire Post Process action and create Prepare Study action

UPGRADE NOTICE: Existing Post Process actions, if any, are disabled when action.jsp runs.

Since prefetching has been retired, the Post Process action is unnecessary and has been retired as well. A new action, Prepare Study, is available to reheat a study. The applicable action settings are on the Prepare Study configuration page, accessible from the Other Lists table. When the action runs, details are logged in the info log.

Manage "dead mounts" more robustly against temporary database issues

When the database is unavailable, the system does not assume the mount is inaccessible and does not start creating dead folders.

Study status should not be cooked if tasks are queued on another registration server

The Cooked status applies when all objects acquired on any registration server have been processed. When a single registration server completes processing and detects another registration server has unprocessed objects, the study’s processing state is set to Partial.

Optimize db access in repohandler

The repository handler uses the existing database access facilities to avoid creating and closing new connections to the database every time the check overload function checks the repository handler’s dirty state.

Origin default user cannot be set to mandatory

When defining an Origin or Replica device, the Service User setting is required.

Webservice to provide study keys for third party clients

A token is used to indicate credentials have been verified for access to specific studies. Web services clients can request these keys using the StudyQuery command. Details are available in the eRAD PACS Web Services Programmer’s Manual.

Webservice queries to support compound queries

The web services interface includes a StudyQuery command that supports compound queries, allowing an OR within the column field and an AND across the columns. For example, select all studies with a patient ID of X and a study date of Y. Details are available in the eRAD PACS Web Services Programmer’s Manual.

Webservice call to query studies with priors

The web services interface has a StudyQuery command that includes an option to return a study and its priors, including those that don’t match the access restriction. The priors are uniquely identified in groups encoded in the results. Details are available in the eRAD PACS Web Services Programmer’s Manual.

Webservice call to provide study location on the shared archive

The web services interface can return the location of a study on the data repository. The StudyQuery command can include an option to return the location on the storage repository. Details are available in the eRAD PACS Web Services Programmer’s Manual.

Web viewer open should trigger reheat on cold studies

When the web viewer requests images for an unprocessed study, it generates them on-the-fly by initiating a processing event. As the images become available, they are streamed to the web viewer and displayed.

Server Build 9.0.8

Add object table cache handling mechanism

REVERSIBILITY NOTICE: If uninstalling, object table entries purged by this feature must be reloaded beforehand by invoking the touch scripts manually.

To prevent the object table from growing indefinitely and storing large amounts of unused data, the system purges the least used records. When object data is needed, the system restores it on the fly from the study’s meta data (i.e., the blob). The length of time data remains in the object table is defined by ObjectCacheTimeout in ~/var/conf/self.rec; the default is 10 days. Checking for and purging expired data occurs hourly from a cron job. The script is CleanupObjectTable and can be invoked manually from the command line, if necessary.
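
A hedged sketch of checking the retention window and forcing a purge outside the hourly cron schedule (the purge script's location and arguments are not documented here, so the bare name is shown as an assumption):

  # show the configured retention period, in days (default 10)
  grep ObjectCacheTimeout ~/var/conf/self.rec
  # run the purge manually; consult the cron entry for the real invocation path
  CleanupObjectTable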

Function to safely remove deleted study

This class of core functions enables support for safely removing deleted studies from the system via the GUI. They remove all remnants of a study from a server farm, including the repository resources, database records, task files, reference counters, locks and temporary files.

Create self-contained cw3/4 web rendering module

A JavaScript SDK has been developed and deployed to enable web clients (browsers) to download and render cw3 and cw4 image files. Toolkit details are available in the eRAD PACS Web Client Image Library manual.

Cleanup deleted and bad-state studies

The Study Cleanup page includes the list of study records in a deleted state and a tool to remove them. The expanded row lists the related studies. Log entries exist for each study removed from this GUI feature.

Raise mysql connection limit

The built-in MySQL connection limit has been raised from 150 to 600. Additionally, the connection pool size has been increased to 32. See HPS-445 for subsequent adjustments.

Viewer-compatible localization of the user profile manager

The viewer configuration setting labels on the copy settings page use the customized resource labels employed by the viewer, making the labels consistent between the viewer and web page.

Build web client SDK as part of epserver

The web client SDK is compiled and packaged as part of the epserver build process.

Increase list filter expression database field length

REVERSIBILITY NOTICE: Filters exceeding 2048 characters created after installing this change will be truncated if it is uninstalled, generating unintended results.

In previous versions, worklist and other list page filters were stored in files, permitting filter parameters of unlimited length. Since filters moved to the database, a filter length limit is imposed. This limit has been increased to 32K. Attempts to save longer filters from the GUI result in a warning. Attempts to import longer filters during an upgrade result in truncation and invalid results.

Possible repository handler caching issues

When accessing multiple image objects from the same study, the repository wrapper intended to efficiently manage repository access was mired in overhead (locking, database access, etc.) before it hit the cache manager. This was resolved by moving the cache management before the wrapper.

Convert invalid user preference value to default and save

Some user preference settings, including worklist poll time and web viewer dynamic help labels, were not converted to current values when the system was upgraded from v7. When detected, these settings are now converted to the system default value and saved automatically. Log entries exist indicating the system made these changes.

Additional performance info needed to benchmark reheat image tasks

Additional performance metrics have been added to analyze system performance when reheating studies.

Over-locking impacts performance when processing the same study on multiple threads

When multiple threads process the same object at the same time, a race condition could negatively impact performance because cache locking was at a global level when it could (should) be localized.

Improve reheatStudies script

Some enhancements to the reheat script have been applied, including better completion handling, using environment variables when available, cleaning up cache before starting the reheat process, and using relative priority assignments.

Server Build 9.0.7

Tool to validate the state and connectivity of all servers

The script ~/component/tools/validateFarm.sh is available to check the state and configuration of all farm servers. This script should be run on the application server. The tool is available from the GUI (Admin/Devices/Farm page) to users with Admin rights. The output lists detected errors, misconfigurations and invalid states. The output differs when medsrv (specifically, the hypervisor service) is running on all servers versus when it is not. See the Jira issue for which checks are performed based on the running state.
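
For example, run on the application server (no arguments are documented, so none are shown):

  # check the state and configuration of all farm servers
  ~/component/tools/validateFarm.sh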

System initiated (automatic) series forward

Device-specific outbound coercion rules are available to filter series and objects when forwarding objects to registered DICOM devices. The feature uses the PROCESS control variable to indicate when to stop processing a specific object. When the variable evaluates to NULL(), the (forward) request for the affected object stops. Skipped objects are identified in oper_info and oper_error log entries. Outbound coercion rules are applied to objects after soft edit changes from PbR objects have been applied. GUI-accessible configuration panels are available on the Devices pages. Preceding and trailing outbound rules applicable for all devices can be configured on the Admin/Devices page. Device-specific outbound rules can be configured on a device’s Edit page. These coercion rules do not apply to forwards initiated in response to a DICOM Retrieve (C-MOVE) request. For instructions using the PROCESS control variable and defining coercion rules, refer to the eRAD PACS Data Coercion manual.

Create a support tool to reprocess all or select studies with keeping the original LRU queue

The script ~/cases/reheatStudies.sh is available to reprocess (reheat) all studies whose cached data files have a ReceivedDateTime before a defined date and time. The output lists all studies in the cache repository and whether each was processed or skipped.

Make list DB conversion more robust

Additional checks have been added to ensure v7 user accounts are converted into v9 user accounts. This feature also permits applying the conversion process to already-converted accounts, if necessary: remove the user account from the database and the account files will be reprocessed when the user logs in again.

Create script consumable output for hdclient printroles

The hdclient tool has a new argument, -s, that creates output in a computer-readable format.
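
A hypothetical invocation combining the printroles subcommand named in the title with the new flag (the argument order is an assumption):

  # print role information in a computer-readable format
  hdclient printroles -s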

Implement authentication on stream server

To enable the viewer to authenticate a user’s session, the stream server passes it the session ID.

Minimize moves in the repository by not insisting on deleting the oldest resource

Any cached study within a configurable range of time is considered purgeable when performing the scheduled (nightly) cache purging exercise. By default, the configurable range is 5% of the defined time range. Configuring the tool to 0% results in strict adherence to the purge time range, making it backward compatible with previous versions. The setting is deleteOld and resides in ~/repositorypart.cfg in the mount’s root directory.

Support sharing files amongst servers in a server farm

UPGRADE NOTICE: This enhancement creates a shared directory with two subdirectories if it has not been created prior to the install. In a server farm, these directories must be shared between all farm servers except the database and load balancer servers prior to starting medsrv.

A shared directory, /home/medsrv/shared, must be created on every server except the database and load balancer servers for sharing files between servers in a server farm. The directory requires two subdirectories, ~/tmp and ~/var. Details for creating the new directory are in the Shared storage requirements section of the eRAD PACS Manufacturing Manual.
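
A minimal sketch of creating the shared directory tree on a farm server (ownership and the export/mount mechanism are site-specific assumptions; follow the Manufacturing Manual for the authoritative steps):

  # create the shared directory and its two subdirectories
  mkdir -p /home/medsrv/shared/tmp /home/medsrv/shared/var
  chown -R medsrv:medsrv /home/medsrv/shared   # assumed ownership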

Store rendering parameters along with images

UPGRADE NOTICE: All cached data needs to be reprocessed to insert additional information into the data files (blobs).

REVERSIBILITY NOTICE: Reprocessed cache files contain additional data that is incompatible with older versions of the software.

Rendering parameters for all clients are stored with the pixel data in the server cache files (blobs). Existing cache data needs to be reprocessed to add these missing details. This new file format is indicated by the .ei4 file extension.

Allow manual override for repository mount's isDedicated flag

UPGRADE NOTICE: To avoid unnecessary space calculations, this new setting should be manually created and set to “true” for any repository whose root and first mount is a single file system.

If a repository’s root and first mount is a single file system, the system unnecessarily calculates the size of the repository every night when making space. To avoid this, a configuration setting, forceDedicated, is available in the repository’s repositorypart.cfg file. When set to “true”, the space checking script skips the size calculation for the associated repository.

Server Build 9.0.6

Handle delete immediate and nuked flags

UPGRADE NOTICE: This feature introduces a new repository called ~/data/tempmeta.repository for storing nuked flags and related files. (See Jira issue for affected files.) It is created during medsrv start. The repository must be shared between all farm servers.

REVERSIBILITY NOTICE: Data in the tempmeta repository is not recognized by previous versions, resulting in invalid data states if downgraded.

Support for deleting studies in a v9 server farm has been completed, including access to the delete and nuked state across multiple registration servers, support for partial deletes from the application server, purging from storage devices (delete immediate requests), and deletes in PbR objects received from external devices.

Detailed task logging should be configurable

A configuration option is available to disable running time calculations in log entries of successful tasks. When INFOLOG_SECONDS exists in ~/var/conf/taskd.conf, running times are suppressed if the task completed successfully within the defined number of seconds. Running times of failed or retried tasks are included in the entries regardless of the configuration setting.
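
For illustration, a hedged example of the configuration entry (the key=value syntax is an assumption; match the format of existing entries in ~/var/conf/taskd.conf):

  # suppress running-time log details for tasks that succeed within 5 seconds
  INFOLOG_SECONDS=5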

Inherited user preferences/settings - GUI configuration

Group and system default settings are configurable from the GUI. The configuration page is accessible from the Preferences section of the Admin/Server Settings/Web Server page. Select the source account and then define one or more target accounts. Assign settings by checking the box in the settings section. Only checked settings will be copied to the target account(s). Use the search field to find a specific setting. (The section will be expanded.) Click Toggle Summary to review the changes to apply. Click Confirm to apply the changes. When finished applying changes, close the panel by clicking the Cancel button in the bottom-right corner.

"Converted invalid worklist polltime value" log message spam

When certain system configuration settings contain an invalid value, the built-in default value is applied, a message is logged in the log file (maximum once per day), the administrator is notified via a message in the GUI messaging tool, and if encountered during startup, a warning is written to stdout.

Automatically manage isShared setting for repositories, Phase I

Since a server knows which repositories are local, the software can manage the sharing setting for them. To prepare for identifying local repositories and configuring them as not shared, the default shared setting for all repositories is set to true, eliminating the need to manually configure each one individually.

Server Build 9.0.5

Support WS API call to prepare (cook) the study

Web services command PrepareStudy() is available to process and cache a study on the PACS system. See details in the eRAD PACS Web Services Programmer’s Manual.

Additional output for user and device import/export tools

UPGRADE NOTICE: The output of the import and export devices tool’s listing option has changed.

The device import and export tool’s list option, -l, dumps the device’s configured DICOM services. The device import tool supports a new command line option, -s, to list the devices configured with workflow triggers (autortv, autofwd, etc.). The user account import tool supports a new command line option, -a, to list the accounts with enabled actions.

SessionException server log is not informative enough

Server error log entries for session exceptions include the cause statement and the stack trace data.

Handling MySQL and JDBC retries - Java side - Part I

Database calls initiated from Java code use thread-local database connections to support retries.

Quick compatible fix for the cw4 compression error

A temporary fix has been applied to gwav4 compression to limit the frequency band traversals to five bands, making it similar to gwav3, which does not exhibit the data overrun condition. Note that affected studies (i.e., those with the overrun condition) must be reprocessed.

Server Build 9.0.4

DEPENDENCY NOTICE: Dependencies exist. See details below.

Separate streamserver component - interface to load balancer

The streamserver component can be assigned by the load balancer.

Port and deploy websocket probing tool

A new tool, ~/component/dcviewer/bin/websockcli, is available for testing the availability of the web socket port. The tool must be invoked using a fully qualified websocket URI.
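
A hypothetical invocation with a placeholder endpoint (the URI below is not a real address):

  # probe the web socket port using a fully qualified websocket URI
  ~/component/dcviewer/bin/websockcli wss://pacs.example.com:443/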

Ability to observe all tasks running across the system in central location

The Tasks page on the web (application) server displays tasks for all servers in the server farm. Tasks from the server displaying the page are displayed by default. Tasks from other servers are displayed collapsed and can be expanded by clicking the top line of the server’s section. Independent task filtering is supported.

Global rc start doesn't show when a server doesn't start

When invoking the global rc start command, no output was generated on stdout, making it difficult to see what started and what conditions, if any, exist. Now the tool displays the output from each server included in the global startup. The output is grouped by server.

Disable select batch worklist action tools

When batch-selecting multiple worklist rows, the split study, scan, upload attachments and technologist view tools are all disabled. When batch selecting all worklist rows and when selecting a combination of orders and studies, all three open tools are disabled as well.

Identify cache state on worklist in a WS client

The web services interface supports retrieving the cache repository state of a study. The field, Preparing Status (CPST), is available in GetStudyData and Query responses. For details, see the EP Web Services Programmer’s Manual.

Password field on Password Reset page is limited to 16 characters

The password field on the password reset page imposed a limit that did not exist on other pages. All pages now permit assigning passwords of unlimited length.

Fill study edit page dropdown lists with distinct database values

Items in selection lists on the study edit page include values stored in existing study records as well as the list values defined by the field’s configuration when the field’s settings (editable from the Customize Labels page) have Limit selection to List Values checked and Is strict enum unchecked.

Add transparent proxy support to haproxy configuration template for DICOM

The proxy server is configured to use transparent proxy mode by default.

Track cache blob changes during viewer session

DEPENDENCY NOTICE: This feature requires viewer-9.0.4 or later.

When the contents of a blob in global cache changes, the viewer gets notified so it can decide whether or not to reload the image data.

Server Build 9.0.3

Include blobtest command line tool part of the deployment

A tool to manipulate blob files, ~/component/imutils/bin/blobtest, is available for use from the command line. Invoke the command with the --help argument for usage information.
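
For example (only the documented --help argument is shown):

  # print usage information for the blob manipulation tool
  ~/component/imutils/bin/blobtest --help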

Viewer profile checksum

The viewer adds a checksum to the profile when saving it and the server calculates a checksum and assures it matches the submitted checksum before it overwrites the saved profile. When the viewer requests the checksum from the server for validation, the server sends the calculated checksum.

App server should call Reg server to run DCReg

UPGRADE NOTICE: The temporary DICOM storage folder has moved to the repository root.

Registration processes initiated by the application server are redirected to the registration server using the intracom service. This feature includes a change to the temporary DICOM storage folder: when the DICOM repository is configured with no mount points, DICOM files are placed in the DICOM repository root folder, ~/data/dicom.repository/tmp (instead of ~/data/tmp). This makes the process consistent with handling repositories with multiple mount points and makes the data created by the application server accessible from the registration server(s).

Disable jit image creation from techview

To avoid unnecessary error messages in the log, jit image processing has been disabled (temporarily) when loading an unprocessed study in the technologist view page.

Support opening non-cached ("uncooked") studies - back end

In order to notify users that the study they are attempting to display is unprocessed, the server needs to check the processing status plus the state of scheduled processing tasks. Once it has the state information, it provides the information to the calling entity so the user can be notified of delays caused by the just-in-time processing effort. An additional interface exists to allow the viewer to monitor the number of processing tasks so it can report the status as it completes.

GUI to restore viewer profile from backup

Administrators can restore a user’s or group’s viewer profile from the available backups using the Profile Backups page available from the user and group accounts page’s Manage Viewer Profile tool. The admin can create, delete and restore backups created by the system and user.

Framework to communicate among servers in a server farm

An interface framework (component) has been added to pass commands and jobs to the server performing a role that it itself does not provide, or to balance the load across multiple servers performing the same role. The component is called intracom. It uses port 4651, which can be overridden by INTRACOM_SERVICE_PORT in ~/etc/virthosts.sh. It starts the intracom service which accepts and services gRPC requests from other servers in the server farm. This service is currently started on application and registration servers.
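
A hedged sketch of overriding the default port; since virthosts.sh is a shell file, an ordinary variable assignment is assumed (the port value below is a placeholder):

  # ~/etc/virthosts.sh -- override the default intracom port (4651)
  INTRACOM_SERVICE_PORT=4652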

Inbound filtering based on coercion rules

Control variables have been added to the (inbound) coercion rule command library. Control variables start with an at sign (@) and use upper case characters. A single control variable has been introduced: @PROCESS. If the rule assigned to the control variable evaluates as NULL, processing (storing, forwarding, etc.) will stop. A log entry is registered indicating this. For all other results, processing continues. Note: at this time, control variables are recognized by pb-scp only. Refer to the eRAD PACS Data Coercion manual for details.

Device-specific selective autoforward (sync) feature

The device auto-forward setting instructs the system to send all objects acquired from third party devices to it, except for objects the device sent itself. Updates to objects are also sent (i.e., objects applicable to the “keep sending updates” setting). The limitation is that new data generated by the system for a study that originated from the configured device is not sent to the device. A feature has been added that instructs the system to auto forward everything it did before, plus any object created on the system. In this way, presentation states and secondary capture objects created by the user and added to the study are sent to the device from which the study originated, assuring both systems have the same collection of objects at all times. The setting is available as a checkbox labeled Sync in the DICOM services/settings section of the device edit page.

Server Build 9.0.2

Separate stream server component

The stream server component has been modified to run independently of other medsrv components. Stream server devices are assigned streaming sessions in a round-robin fashion. As a result, for a given session ID, the same stream server is presented so the viewer can reuse existing connections, when possible.

Separate ingestion server component

Data ingestion has been separated into a dedicated role and dubbed the Registration server as part of the baseline framework effort.

Design and implement the revised "Processed" storage

Data processing has been overhauled as part of the baseline effort to minimize iops by storing data as blobs in single files.

Design and implement the revised "Cache" storage

Data caching has been overhauled as part of the baseline effort to minimize iops by storing cache data as blobs in fewer files.

Review and redesign the DB schema

The database has been overhauled as part of the baseline effort to eliminate inefficient and unused fields, store new data such as a study’s processed state and repository location, and support object information that existed in the retired object table.

Optimize SQL database access implementation

As part of the overall refactoring, connections to the SQL server persist. The framework caches prepared statements for reuse.

Handle the situation when study resides on multiple mounts in the data repository

This is the application of the repository handler’s new middle layer for tracking the state of meta data in the repositories and handling the existence of data on multiple repositories.

Upgrade poco to the latest stable version

Poco version 1.11.2 is installed.

Avoid blocking for non-responsive network storage

When a networked storage device is unreachable, access requests time out and the device is taken offline so subsequent requests can complete. While offline, access requests to the device are ignored. The system backs off for five minutes, checking the device after each period until it is back online.

Retire obsolete components

Obsolete components have been removed from the code base, including applet, pref, ct and pcre. Some medsrv components have been obsoleted in favor of the platform component, including curl, boost and openssl.

Rewrite Customize Labels page from jsp to GWT

The Customize Labels page used to customize the database has been updated to use GWT and adopt a look and feel similar to other web pages. All existing features remain, including the ability to configure individual settings for most database fields and the ability to create and modify calculated fields. Some minor differences exist as a result of changes to the associated feature, not because of the update to GWT. See the user help pages for details.

Enumerated filters should support free text search

Worklist columns defined as enumerated lists might contain values not present in the configured list of values. A free text field is available in the filter panel so these values can be entered as search criteria.

Drag and drop of multi-value filters

Multi-value fields such as Modality allow filtering on multiple values. Users drag the values into the filter panel; individual values are separated by backslash characters.
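
Purely as an illustration of the separator convention (the filter value here is hypothetical), splitting such a value in Java looks like this:

    // Illustrative only: a multi-value Modality filter such as CT\MR\US splits
    // into its individual values on the backslash separator.
    public class MultiValueFilter {
        public static void main(String[] args) {
            String filter = "CT\\MR\\US";            // backslashes escaped in Java source
            String[] values = filter.split("\\\\");  // regex matching one literal backslash
            for (String v : values) {
                System.out.println(v);               // prints CT, MR, US
            }
        }
    }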

Track study process state across the system

A study field, PROCSTAT, has been created to track the process state of the study. States include <empty> (state unknown), frozen (DICOM objects exist but unprocessed and uncached), cold (processed but cache data removed or obsolete), cooking (partially processed) and cooked (fully processed and cached). The value can be displayed on the worklist.
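
For illustration only, the documented states map naturally onto an enumeration; the Java names below are assumptions, since the release notes do not specify how the field is represented internally:

    // Illustrative enum for the documented PROCSTAT values.
    public enum ProcessState {
        UNKNOWN,   // <empty>: state unknown
        FROZEN,    // DICOM objects exist but are unprocessed and uncached
        COLD,      // processed, but cache data removed or obsolete
        COOKING,   // partially processed
        COOKED     // fully processed and cached
    }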

Provide notification/tools to resolve users with weak password hashes

A command line tool, ~/component/tools/checkWeakPasswords.sh, exists to identify and update user accounts using weak password hashes. The tool runs from a daily cron job; if affected accounts are found, a notification message is posted to administrator accounts.

Other list filter changes are discarded when the user leaves the page

Some list pages, including the Other List page, have been updated to remember the applied filters and sort order, like the Worklist and other pages, so when returning to the page, the previous content appears rather than reloading the default page.

Prohibit nonsensical name and date formats

When configuring name, date and time formats, the system checks for anomalies such as duplication of a field component and rejects the request.

Support for saving and restoring the profile from the viewer

The server supports the viewer’s requests to save and delete a user profile, return the list of saved user profiles, and restore a user profile.

Remove weak passwords when importing user accounts

When importing user accounts from a backup file, the system checks the password hash and removes the weak ones. These users will need to reset their password when logging in. The affected accounts are listed in the import log file.

Add proper Display Name to all Tasks (Sub-job's description on the Admin/Tasks page)

Some task entries on the Tasks page, specifically system tasks on the Sub-jobs page, were missing descriptions or displayed a generic description. These tasks now display a representative description in the Tasks page table.

Create a load balancer component

A load balancer (haproxy) component has been created to launch the load balancer when the system initializes. The load balancer component starts if the server is configured as a load balancer in ~/etc/balancer.role. Default configuration settings exist in the component directory, ~/component/haproxy/config/. Settings can be overridden by customizing copies of haproxy.cfg.template and syslog.conf.template in ~/var/haproxy/. The haproxy configuration file, haproxy.cfg, is created from the template during startup. The proxy log is stored in ~/var/log/haproxy.log and rotated weekly.

Introduce global/shared resource locking facilities

Resource locking previously applied to a single server. Now that resources can be accessed by multiple servers at the same time (e.g., from multiple stream servers), locking has been extended across multiple servers.

Generate license for a server that does not run apache

Servers that do not run apache, such as the stream server, database server and load balancing server, do not support GUI-based licensing. Additional instructions are available in the licensing manual for collecting the license request file and installing the license file from the command line.

Add blob fetch support to stream server

UPGRADE NOTICE: Servers using a local (fast) repository need to be configured prior to upgrade. The stream server moves blob data from a remote (slow) repository to a local (fast) repository. If the system is not configured with a local cache repository (~/var/localcache.repository), a link pointing to the remote repository (~/var/cache.repository) must exist, and the system will not attempt to move the data.

Web services enhancements for MCS - Queue length and position

Web services commands have been added to query the MCS server about a job’s position in the queue, QueuePosition(), and the queue length, QueueLength(). See the eRAD PACS Web Services Programmer’s Manual for details.

Support for a custom log4j configuration file to extend/override factory default settings

Log4j has been updated to version 2.18.0. Groovy script has been updated to version 3.0.12. A custom log4j configuration file, log4j2-custom.xml, exists in ~/var/conf to override select settings from the system configuration file. Refer to the template file, ~/component/classes.com/erad/pacs/log4j2-custom.xml, for customization instructions.

Missing GUI setting for changed state

The Changed State setting has been restored to the Server Settings page.

Start/stop servers in the farm in an appropriate order

A command line Java tool is available to manually start and stop the hyper+ server farm servers in their proper order, as defined by each server’s role configuration. Options include starting the server farm, stopping the server farm and listing the server groups. Refer to the Jira issue for usage details and startup order dependencies.

New jsp file to load qc output after checking session

Web applications can download the quality control results file, ~/var/quality/qc.html, from a server provided the request comes from a qualified source, meaning a valid eRAD PACS user session ID exists and the account has admin or support rights. The command is cases/showQuality.jsp.

Identify cache state on the PACS worklist

A worklist column, ProcSt, displays the processed (cooked) state of a study’s data, i.e., whether it is available for streaming. A worklist tool, Reheat Study, is available to manually start processing a study for streaming.

Create hyperdirector service

The service role functionality used to register a service in a server farm has been separated out and now runs on each server as the hyperdirector service. This service is disabled when all services run on a single server.

Repo management should only be running on the appropriate servers

Each storage repository is managed by a single server. Local cache repositories are managed by respective stream and registration servers. Global repositories, including global cache, data, processed and meta repositories, are managed by the application server.

Run Actions only on the app server

In a hyper+ server farm, Actions are run on the application server only.

Review cronjobs and their relations to servers

All cronjobs have been configured to run on applicable servers based on the server’s role. For the complete list of cronjobs and the servers on which they run, refer to the Jira issue. Use crontab -l after rc start completes to get a list of all cronjobs registered for an individual server.

Herpa streaming

Support has been added allowing the viewer to download herpa data over the streaming channels instead of from the web server.

Limit redundant runs of prepstudy during ingestion/processing

The system checks for running study registration or reprocessing tasks when it initiates the process to prepare the study data for use. If any are found, the preparation task is postponed to avoid repeated processing tasks.

Internal locking of repository handler shall be aware of the repository's shared state

Repositories shared by multiple servers in a hyper+ server farm employ a global locking mechanism managed by the database server. Refer to the isShared setting in the repository handler manual.
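
The notes do not spell out the locking mechanism, but one way to picture a database-arbitrated global lock is with MySQL’s GET_LOCK()/RELEASE_LOCK() functions, as in the sketch below (an assumption for illustration, not necessarily how medsrv implements it):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Sketch of a cross-server lock arbitrated by the shared MySQL server.
    // GET_LOCK returns 1 on success and 0 on timeout; the lock is released
    // explicitly or when the acquiring connection closes.
    public class GlobalLock {
        private final Connection conn;

        public GlobalLock(Connection conn) {
            this.conn = conn;
        }

        public boolean acquire(String name, int timeoutSeconds) throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement("SELECT GET_LOCK(?, ?)")) {
                ps.setString(1, name);
                ps.setInt(2, timeoutSeconds);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() && rs.getInt(1) == 1;
                }
            }
        }

        public void release(String name) throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement("SELECT RELEASE_LOCK(?)")) {
                ps.setString(1, name);
                ps.executeQuery();
            }
        }
    }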

Incorporate gwav4 compression

DEPENDENCY NOTICE: This feature requires viewer-9.0.2

The streaming technology has added support for gwav version 4, permitting better initial quality from smaller thumbnail images. The viewer still accepts gwav3 and gwav1, if offered by the server.

Server Build 9.0.1

Download PDF

Minimize cache footprint

All system components, including viewer streaming, web viewer, technologist view, etc., support the single compressed cache data format (cw3). The creation of data in other formats has been terminated.

Track data/processed/cache storage state and manage dead mounts for study data storage

Calls to the repository handler have been replaced with a middle layer that tracks the state of meta data and manages the data accordingly, reporting data location, creating folders, moving data, indicating when folders are inaccessible, etc. The repository handler’s dirty file handling and resolving mechanism remains unchanged. See the updated Repository Handler manual for specific details.

Refactor database access in application code to be database-agnostic

Performance-critical calls to the database have been encapsulated in an abstraction layer so the database is not directly exposed to medsrv.  In addition to providing a common interface, it allows the application to maintain persistent connections to the database.

Facilitate runtime server role selection

Servers can be assigned specific roles to play, including stream server, registration server, database server, application server and web server. The setting is defined in ~/etc/.role. If no specific role is defined, all services are performed.

Stream server jit not creating raw files even if the format is explicitly requested

The common stream server code failed to generate raw files when explicitly requested. While this is irrelevant for v9 (because its stream server doesn’t use raw files), the change was made to the common code base, which v9 does use.

Upgrade java to current stable version

Java has been upgraded to java-17-openjdk-17.0.3.0.7. The system uses the platform’s version of Java.

Upgrade apache/tomcat to the latest stable version

Apache has been upgraded to httpd-2.4.37. Tomcat has been upgraded to version 9.0.63. The system uses a custom build of Tomcat but uses the platform’s Apache.

Upgrade mysql to the latest stable version

REVERSIBILITY NOTICE: Once upgraded, the database is modified and no longer compatible with the previous version.

MySQL has been upgraded to version 8.0.26. The system uses the platform’s version of MySQL.

Upgrade DCMTK

The DCMTK library has been updated to version 3.6.7.

Upgrade gwt to the latest stable version

GWT has been upgraded to version 2.9.0.

Upgrade openssl to the latest stable version

Openssl has been upgraded to version 1.1.1k. The system uses the platform’s version of Openssl.

Deleting study when study resides on multiple mounts in the data repository

Studies that exist on multiple repositories (which is possible when a repository was not mounted at some point while the data was updated) cannot be deleted via the user interface or by the system. Users are notified of this on the delete review page, and entries are inserted into the log files.

UDI for v9 server

The UDI value for version 9.0 has been updated to 0086699400025590. This value is displayed on the appropriate software identification pages.

Provide a warning sign on the WL for studies that reside on multiple mounts

The Partially Inaccessible column is available to indicate when a study resides on multiple repository mounts. This column is hidden by default. Add it to your layout using the Edit Fields tool.

Forwarding study when study resides on multiple mounts in the data repository

Forwarding a study that resides on multiple mount points will result in an error. If initiated from the GUI, the user is notified. If initiated from a forward action, the request will be retried when the action runs again (in five minutes).

Editing study when study resides on multiple mounts in the data repository

Editing a study that resides on multiple mount points will result in an error. If initiated from the GUI, the user is notified. If initiated from an edit action, the request will be retried when the action runs again (in five minutes).

Editing/adding report and notes when study resides on multiple mounts in the data repository

Editing a report or report notes for a study residing on multiple mount points is not supported. If the condition exists, the report add/edit button and the note add/edit button are disabled in the patient folder.

Remove legacy, unimplemented jsp-s

Java servlet functions retired or no longer in use in version 9 have been removed from the code base.

Web Services notification triggered at child did not get sent

Depending on timing, an auto-correction message originating at a child server could jump ahead of the first object registration message, leading third-party devices to believe a study exists before it actually does. Auto-correction messages are now suspended until the hub server registers at least one object.

Add Study Update to Web Service Device Message Triggers

Web services devices can be configured to receive an order update notification when the study data has been edited. The trigger is enabled when the Study Update setting in the Order Message Triggers section of the web services device edit page is checked. Update sends a notification on new object acquisition, any edit or object re-acquisition. Reindex sends a notification when a study gets reindexed by an admin or the system.

Time based warning message incorrect

The wording of the notification message indicating the repository handler had to delete data even though the threshold wasn’t crossed has been changed to more accurately reflect the cause of the problem.

Study with invalid time zone offset value displays empty study date

When an object contains a non-compliant time zone offset value, the system ignores the bad data and presents time values as recorded in the object.

Serialize (manage) cw3 thumbnail downloads on tech view page

Downloads of CW3 images to the technologist view page and the web viewer need to be managed by the client. A maximum of four images are downloaded in parallel to avoid overloading the browser.
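
The throttle itself lives in the browser client, but the idea is the familiar bounded worker pool; a Java analogue (illustrative only, with hypothetical names) is shown below:

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Illustrative analogue of the client-side throttle: at most four downloads
    // run at any time; the rest queue until a slot frees up.
    public class ThumbnailDownloader {
        private final ExecutorService pool = Executors.newFixedThreadPool(4);

        public void downloadAll(List<String> imageUrls) {
            for (String url : imageUrls) {
                pool.submit(() -> download(url));
            }
        }

        private void download(String url) {
            // Placeholder for the actual CW3 fetch.
        }
    }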

Include list name in logs generated by actions

Log entries on the Logs page and in the oper_info log that contain details for events resulting from an action (except the Prefetch action) identify the worklist filter that matched the study.

Warn admin when a study is acquired that might nullify the server license

The server’s license is checked against multiple events and data. When one of these is detected but is not sufficient to invalidate the license, the system sends a notification message to administrators. Admins can contact eRAD support for details and ways to avoid a license exception.

Change default media creation engine to local

The media creation engine now defaults to the local MCS. This applies to new installs and upgrades.

Handle lost SQL connections/reconnects from C/C++ more reliably

When the underlying connection to the database is lost, the software transparently reconnects and retries the pending operation.
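
The affected code is C/C++, but the reconnect-and-retry pattern can be sketched in Java terms (class and method names are hypothetical, and the reconnect step is left as a placeholder):

    import java.sql.Connection;
    import java.sql.SQLException;

    // Minimal sketch: if an operation fails because the connection was lost,
    // reopen the connection and run the operation once more.
    public class RetryingDb {
        public interface DbOp<T> {
            T run(Connection conn) throws SQLException;
        }

        private Connection conn;

        public RetryingDb(Connection initial) {
            this.conn = initial;
        }

        public <T> T execute(DbOp<T> op) throws SQLException {
            try {
                return op.run(conn);
            } catch (SQLException failure) {
                if (conn.isValid(2)) {
                    throw failure;       // not a connection problem: propagate
                }
                conn = reconnect();      // transparently reopen the connection
                return op.run(conn);     // retry the pending operation once
            }
        }

        private Connection reconnect() throws SQLException {
            // Placeholder: obtain a fresh connection from the driver or pool here.
            throw new UnsupportedOperationException("reconnect not shown in this sketch");
        }
    }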

Remove configuration options for mandatory v9 features

Some features optional prior to version 9 are no longer optional. They are hard configured by default. The settings for these features have been removed from the GUI.

Local cache usage support for registration

The initial registration creates the compressed image files on the local cache repository before adding them to the blob. This requires the creation of a local cache repository (~/var/localcache.repository).

Server Build 9.0.0

Download PDF

Design and implement "Meta" storage repository

DICOM data is stored in a separate (meta) repository from processed data.

Add ability to track actual repo location via callback/event notifications

The repo handler supports a callback interface used to track resource locations without needing to use the locate function.
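
As an illustration of the callback idea (the interface and method names below are assumptions, not the repo handler’s real API), registered listeners are notified whenever a resource location changes instead of having to poll:

    // Illustrative callback interface for location tracking.
    public interface RepoLocationListener {
        void onLocationChanged(String studyUid, String repository, String path);
    }

    // Sketch of the producer side: listeners are registered once and notified
    // whenever data is registered, moved or consolidated.
    class RepoHandlerSketch {
        private final java.util.List<RepoLocationListener> listeners =
                new java.util.concurrent.CopyOnWriteArrayList<>();

        public void addListener(RepoLocationListener l) {
            listeners.add(l);
        }

        protected void notifyMoved(String studyUid, String repository, String path) {
            for (RepoLocationListener l : listeners) {
                l.onLocationChanged(studyUid, repository, path);
            }
        }
    }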

Web service's ForwardStudy operation should handle partial (series/object) forwards

The web services Forward command supports forwarding individual series and objects from the same study to a defined target. See the eRAD PACS Web Services Programmer’s Manual for details.

Update ServerSettingsConst hierarchy to be enum based

Structural changes were applied to improve the handling of server settings.

Report templates are not exported/imported

Report templates are now included in the user export and import tools.

Repository handler should do the auto-resolution even if above the fullLimit

The repository handler automatically consolidates studies split between multiple partitions even when the full limit threshold has been exceeded, except when the physical limit has been exceeded. The physical limit is defined by the configuration setting hardFullLimit. The built-in default is 99.9%. This can be overridden in repository.cfg.
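
The threshold logic reduces to a simple comparison; the sketch below is illustrative, with hypothetical class and parameter names, and assumes limits expressed as fractions of capacity:

    // Sketch of the documented decision: consolidation is allowed above the
    // fullLimit threshold but never above the hard physical limit
    // (hardFullLimit, 99.9% by default).
    public class ConsolidationPolicy {
        private final double fullLimit;       // e.g. 0.95 (95% full)
        private final double hardFullLimit;   // built-in default 0.999 (99.9%)

        public ConsolidationPolicy(double fullLimit, double hardFullLimit) {
            this.fullLimit = fullLimit;
            this.hardFullLimit = hardFullLimit;
        }

        public boolean mayConsolidate(double usedFraction) {
            // Exceeding fullLimit no longer blocks auto-resolution,
            // but exceeding the physical limit still does.
            return usedFraction < hardFullLimit;
        }
    }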

Make the rights setting color more visible when using dark mode

The background color of the individual rights fields when using the dark theme has been modified to make the setting indicator more visible.

Return error code by dotcom.ReCollect when recollecting dotcom configuration fails

The command line tool to recollect dotcom information includes options to return a non-zero code when the operation encounters an error or warning.

Back-end script for repo.jsp and validate.jsp needed

The repo.jsp and validate.jsp scripts have been updated to dynamically generate a system session for use in automation tools.

Log import user and user conversion into an upgrade log file

Log entries for importing user accounts and for user conversion (during upgrade) are consolidated into dedicated log files, ~/var/log/UserExport, ~/var/log/UserImport and ~/var/log/UserConversion.

Add "generic title+label" option to report template editor

A generic report template type has been added to support adding Dcstudy fields to a report view or report edit template. See the eRAD Layout XML Customization manual for details.

Change the default of warnmoveTime for the data/dicom.repository

The default for the warnMoveTime setting has changed to five hours for data repositories. For all other repositories, the default remains two days.

Admin GUI feature to review and delete nuked study files

Nuked study files retain study data, which is used to populate a new web page for reviewing and deleting these files. The Study Cleanup page is available to users with Support rights from the Admin menu. The page is empty by default. Enter criteria to display a list of up to 5,000 nuked studies. The tools are consistent with those on the Worklist page. When cleaning studies that exist on child servers, start with the child before cleaning up the parent. Cleanup requests and results are logged in the forever log.

Create viewer profile backup file after editing profile from the desktop viewer

When the user updates their viewer settings, the existing profile file is saved as a backup so it can be restored later, if necessary. These backup files are propagated throughout the dotcom.

Default "apply to current content" to no in v8 action lists

The default for the Apply to Current Content setting for all actions has changed to “No”. Existing actions are not affected as long as they remain enabled. Once an action is disabled, the new default is used when it is re-enabled, unless manually overridden during setup.

Baseline server code base on v8.0

eRAD PACS version 8 medsrv build 49, asroot 8.0.1 and platform-7.9.0 make up the starting code base for eRAD PACS v9.0. Modifications have been applied to account for labeling (eRAD PACS v9.0) and packaging (RPMs, etc.)