eRAD
PACS Server
The stream server log, ~/var/log/streaminfo.log, includes transmission and reception metrics used by the monitoring tools to track streaming statistics.
The monitoring tools are available from the Application server’s Admin/Server Information page to users with Admin or Support rights.
Added support for Origin and Replica modes. An Origin server shares the state of the data/dicom repositories with a Replica server using proprietary notifications instead of formal communication mechanisms (such as DICOM forwards). Notifications are available to share a study’s storage location, indicate objects have been created or deleted, and convey data repository activity. Includes support for new Origin and Replica device types. Use the device’s Ping command to verify the Origin or Replica device is available and configured to recognize the server. To enable Origin mode, set the Origin attribute in ~/var/conf/self.rec and restart medsrv. To enable Replica mode, set the Replica attribute in ~/var/conf/self.rec and restart medsrv.
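A minimal sketch of enabling Origin mode, assuming self.rec uses simple attribute=value lines (the attribute value shown is an assumption; verify the exact syntax against the product documentation):
    # Hypothetical ~/var/conf/self.rec entry on the Origin server; set the Replica attribute on the Replica server instead.
    Origin=TRUE
    # Restart medsrv afterward per the standard procedure.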
Added repository handler support for Origin and Replica modes, in which the Origin’s core repository handler notifies the Replica’s core repository handler when changes are made.
Added support for DICOM forward requests initiated on an Origin device by any activity (acquisition, edit or deletion, whether performed by the user or the system): instead of initiating a DICOM forward, the Origin device sends the Replica device a notification identifying the shared DICOM repository on which the data resides. Additionally, an Origin device can accept and process edit and deletion updates sent from the Replica device. In most cases, the Origin device performs the operation and then notifies the Replica device to apply the same change.
The monitoring tools display cache, data, processed and other repository management activity (moves, deletes).
The monitoring tools display the percentage of MySQL connections in use.
The reheatStudies.sh script no longer reports warnings about deprecated driver classes.
Several enhancements have been made to the farm validator: mounts on shared repositories are checked in addition to the repository root; the cache, data, meta, tempdata, processed, user and shared folders are checked on applicable farm servers only; failure messages are reported in the application server’s rc.log file; data repository sharing is not checked on servers running in Replica mode; and shared folder error messages now indicate this detail rather than using a generic description.
UPGRADE NOTICE: Existing Post Process actions, if any, are disabled when action.jsp runs. Since prefetching has been retired, the Post Process action is unnecessary and has been retired as well. A new action, Prepare Study, is available to reheat a study. The applicable action settings are available on the Prepare Study configuration page, accessible from the Other Lists table. When the action runs, details are logged in the info log.
When the database is unavailable, the system does not assume the mount is inaccessible and does not start creating dead folders.
The Cooked status applies when all objects acquired on any registration server have been processed. When a single registration server completes processing and detects another registration server has unprocessed objects, the study’s processing state is set to Partial.
The repository handler uses the existing database access facilities to avoid creating and closing new connections to the database every time the check overload function checks the repository handler’s dirty state.
When defining an Origin or Replica device, the Service User setting is required.
A token is used to indicate credentials have been verified for access to specific studies. Web services clients can request these tokens using the StudyQuery command. Details are available in the eRAD PACS Web Services Programmer’s Manual.
The web services interface includes a StudyQuery command that supports compound queries, allowing an OR within the column field and an AND across the columns. For example, select all studies with a patient ID of X and a study date of Y. Details are available in the eRAD PACS Web Services Programmer’s Manual.
The web services interface has a StudyQuery command that includes an option to return a study and its priors, including those that don’t match the access restriction. The priors are uniquely identified in groups encoded in the results. Details are available in the eRAD PACS Web Services Programmer’s Manual.
The web services interface can return the location of a study on the data repository. The StudyQuery command can include an option to return the location on the storage repository. Details are available in the eRAD PACS Web Services Programmer’s Manual.
When the web viewer requests images for an unprocessed study, it generates them on-the-fly by initiating a processing event. As the images become available, they are streamed to the web viewer and displayed.
REVERSIBILITY NOTICE: If uninstalling, object table entries purged by this feature must be reloaded beforehand by invoking the touch scripts manually. To prevent the object table from growing indefinitely and storing large amounts of unused data, the system purges the least used records. When object data is needed, the system restores it on the fly from the study’s meta data (i.e., the blob). The time period that data remains in the object table is defined by ObjectCacheTimeout in ~/var/conf/self.rec. The default is 10 days. Checking for and purging expired data occurs hourly from a cron job. The script is CleanupObjectTable. It can be invoked manually from the command line, if necessary.
This class of core functions enables support for safely removing deleted studies from the system via the GUI. They remove all remnants of a study from a server farm, including the repository resources, database records, task files, reference counters, locks and temporary files.
A JavaScript SDK has been developed and deployed to enable web clients (browsers) to download and render cw3 and cw4 image files. Toolkit details are available in the eRAD PACS Web Client Image Library manual.
The Study Cleanup page includes the list of study records in a deleted state and a tool to remove them. The expanded row lists the related studies. Log entries exist for each study removed from this GUI feature.
The built-in MySQL connection limit has been raised from 150 to 600. Additionally, the connection pool size has been increased to 32. See HPS-445 for subsequent adjustments.
The viewer configuration setting labels on the copy settings page use the customized resource labels employed by the viewer, making the labels consistent between the viewer and web page.
The web client SDK is compiled and packaged as part of the epserver build process.
REVERSIBILITY NOTICE: Filters exceeding 2048 characters created after installing this change will be truncated if it is uninstalled, generating unintended results. In previous versions, worklist and other list page filters were stored in files, permitting filter parameters of unlimited length. Since filters moved to the database, a filter length limit is imposed; this limit has been increased to 32K. Attempts to save longer filters from the GUI result in a warning. Attempts to import longer filters during an upgrade will result in truncation and invalid results.
When accessing multiple image objects from the same study, the repository wrapper intended to efficiently manage repository access was mired in overhead (locking, database access, etc.) before it hit the cache manager. This was resolved by moving the cache management before the wrapper.
Some user preference settings, including worklist poll time and web viewer dynamic help labels, were not converted to current values when the system was upgraded from v7. When detected, these settings are now converted to the system default value and saved automatically. Log entries exist indicating the system made these changes.
Additional performance metrics have been added to analyze system performance when reheating studies.
When multiple threads process the same object at the same time, a race condition could negatively impact performance because cache locking was applied at a global level when it could, and should, be localized.
Some enhancements to the reheat script have been applied, including better completion handling, using environment variables when available, cleaning up cache before starting the reheat process, and using relative priority assignments.
The script ~/component/tools/validateFarm.sh is available to check the state and configuration of all farm servers. This script should be run on the application server. The tool is available from the GUI (Admin/Devices/Farm page) to users with Admin rights. The output lists detected errors, misconfigurations and invalid states. The output differs when medsrv (specifically, the hypervisor service) is running on all servers versus when it is not. See the Jira issue for which checks are performed based on the running state.
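A minimal usage sketch, run as the medsrv user on the application server (no command line arguments are assumed):
    # Check the state and configuration of all farm servers and review the reported errors.
    ~/component/tools/validateFarm.sh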
Device-specific outbound coercion rules are available to filter series and objects when forwarding objects to registered DICOM devices. The feature uses the PROCESS control variable to indicate when to stop processing a specific object. When the variable evaluates to NULL(), the (forward) request for the affected object stops. Skipped objects are identified in oper_info and oper_error log entries. Outbound coercion rules are applied to objects after soft edit changes from PbR objects have been applied. GUI-accessible configuration panels are available on the Devices pages. Preceding and trailing outbound rules applicable for all devices can be configured on the Admin/Devices page. Device-specific outbound rules can be configured on a device’s Edit page. These coercion rules do not apply to forwards initiated in response to a DICOM Retrieve (C-MOVE) request. For instructions using the PROCESS control variable and defining coercion rules, refer to the eRAD PACS Data Coercion manual.
The script ~/cases/reheatStudies.sh is available to reprocess (reheat) all studies whose cached data files have a ReceivedDateTime before a defined date and time. The output lists all studies in the cache repository and whether or not they are processed or skipped.
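A minimal usage sketch, assuming the cutoff date and time is passed as a command line argument (the argument format shown is an assumption; check the script’s usage output):
    # Reprocess studies whose cached data files were received before the cutoff.
    ~/cases/reheatStudies.sh "2024-01-01 00:00:00"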
Additional checks have been added to ensure v7 user accounts are converted into v9 user accounts. This feature also permits applying the conversion process to previously converted accounts, if necessary: remove the user account from the database and the account files will be reprocessed when the user logs in again.
The hdclient tool has a new argument, -s, that creates output in a computer-readable format.
To enable the viewer to authenticate a user’s session, the stream server passes it the session ID.
Any cached study within a configurable range of time is considered purgeable when performing the scheduled (nightly) cache purging exercise. By default, the configurable range is 5% of the defined time range. Configuring the tool to 0% results in strict adherence to the purge time range, making it backward compatible with previous versions. The setting is deleteOld and resides in the repositorypart.cfg file in the mount’s root directory.
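An illustrative repositorypart.cfg entry, assuming key=value syntax and a percentage value (both assumptions):
    # 0 enforces strict adherence to the purge time range (pre-9.0 behavior); the built-in default is 5.
    deleteOld=0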
UPGRADE NOTICE: This enhancement creates the shared directory and its two subdirectories if they have not been created prior to the install. A shared directory, /home/medsrv/shared, must be created on each server in a server farm, except the database and load balancer servers, for sharing files between servers, and it must be shared between all of those servers prior to starting medsrv. The directory requires two sub-directories, ~/tmp and ~/var. Details for creating the new directory are in the Shared storage requirements section of the eRAD PACS Manufacturing Manual.
UPGRADE NOTICE: All cached data needs to be reprocessed to insert additional information into the data files (blobs).
REVERSIBILITY NOTICE: Reprocessed cache files contain additional data that is incompatible with older versions of the software.
Rendering parameters for all clients are stored with the pixel data in the server cache files (blobs). Existing cache data needs to be reprocessed to add these missing details. This new file format is indicated by the .ei4 file extension.
UPGRADE NOTICE: To avoid unnecessary space calculations, this new setting should be manually created and set to “true” for any repository whose root and first mount is a single file system. If a repository’s root and first mount is a single file system, the system unnecessarily calculates the size of the repository every night when making space. To avoid this, a configuration setting, forceDedicated, in the repository’s repositorypart.cfg file is available. When set to “true”, the space checking script skips the size calculation for the associated repository.
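An illustrative repositorypart.cfg entry; the setting name and the “true” value come from this note, while the key=value syntax is an assumption:
    # Skip the nightly size calculation for a repository whose root and first mount share one file system.
    forceDedicated=true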
UPGRADE NOTICE: This feature introduces a new repository called ~/data/tempmeta.repository for storing nuked flags and related files. (See Jira issue for affected files.) It is created during medsrv start. The repository must be shared between all farm servers.
REVERSIBILITY NOTICE: Data in the tempmeta repository is not recognized by previous versions, resulting in invalid data states if downgraded.
Support for deleting studies in a v9 server farm has been completed, including access to the delete and nuked state across multiple registration servers, support for partial deletes from the application server, purging from storage devices (delete immediate requests), and deletes in PbR objects received from external devices.
A configuration option is available to disable running time calculations in log entries of successful tasks. When INFOLOG_SECONDS exists in ~/var/conf/taskd.conf, running times are suppressed if the task completed successfully within the defined number of seconds. Running times of failed or retried tasks are included in the entries regardless of the configuration setting.
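An illustrative taskd.conf entry, assuming a shell-style KEY=value line (the threshold shown is arbitrary):
    # Suppress running times for tasks that complete successfully within 10 seconds.
    INFOLOG_SECONDS=10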
Group and system default settings are configurable from the GUI. The configuration page is accessible from the Preferences section of the Admin/Server Settings/Web Server page. Select the source account and then define one or more target accounts. Assign settings by checking the box in the settings section; only checked settings will be copied to the target account(s). Use the search field to find a specific setting (the section will be expanded). Click Toggle Summary to review the changes to apply. Click Confirm to apply the changes. When done and after applying changes, close the panel by clicking the Cancel button in the bottom-right corner.
When certain system configuration settings contain an invalid value, the built-in default value is applied, a message is logged in the log file (maximum once per day), the administrator is notified via a message in the GUI messaging tool, and if encountered during startup, a warning is written to stdout.
Since a server knows which repositories are local, the software can manage the sharing setting for them. To prepare for identifying local repositories and configuring them as not shared, the default shared setting for all repositories is set to true, eliminating the need to configure each one manually.
Web services command PrepareStudy() is available to process and cache a study on the PACS system. See details in the eRAD PACS Web Services Programmer’s Manual.
UPGRADE NOTICE: The output of the import and export devices tool’s listing option has changed. The device import and export tool list option, -l, dumps the device’s configured DICOM services. The device import tool supports a new command line option, -s, to list the devices configured with workflow triggers (autortv, autofwd, etc.). The user account import tool supports a new command line option, -a, to list the accounts with enabled actions.
Server error log entries for session exceptions include the cause statement and the stack trace data.
Database calls initiated from Java code use thread-local database connections to support retries.
A temporary fix has been applied to gwav4 compression to limit the frequency band traversals to five bands, making it similar to gwav3, which does not exhibit the data overrun condition. Note that affected studies (i.e., those with the overrun condition) must be reprocessed.
The streamserver component can be assigned by the load balancer.
A new tool, ~/component/dcviewer/bin/websockcli, is available for testing the availability of the web socket port. The tool must be invoked using a fully qualified websocket URI.
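A minimal usage sketch; the host, port and path in the URI are placeholders for the deployment’s actual web socket endpoint:
    # Test whether the web socket port is reachable.
    ~/component/dcviewer/bin/websockcli wss://pacs.example.com:443/ws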
The Tasks page on the web (application) server displays tasks for all servers in the server farm. Tasks from the server displaying the page are displayed by default. Tasks from other servers are displayed collapsed and can be expanded by clicking the top line of the server’s section. Independent task filtering is supported.
When invoking the global rc start command, no output was generated on stdout, making it difficult to see what started and what conditions, if any, exist. Now the tool displays the output from each server included in the global startup. The output is grouped by server.
When batch-selecting multiple worklist rows, the split study, scan, upload attachments and technologist view tools are all disabled. When batch-selecting all worklist rows, and when selecting a combination of orders and studies, all three open tools are disabled as well.
The web services interface supports retrieving the cache repository state of a study. The field, Preparing Status (CPST), is available in GetStudyData and Query responses. For details, see the EP Web Services Programmer’s Manual.
The password field on the password reset page imposed a limit that did not exist on other pages. All pages now permit assigning passwords of unlimited length.
Items in selection lists on the study edit page include values stored in existing study records as well as the list values defined by the field’s configuration when the field’s settings (editable from the Customize Labels page) have Limit selection to List Values checked and Is strict enum unchecked.
The proxy server is configured to use transparent proxy mode by default.
DEPENDENCY NOTICE: This feature requires viewer-9.0.4 or later.
When the contents of a blob in global cache changes, the viewer gets notified so it can decide whether or not to reload the image data.
A tool to manipulate blob files, ~/component/imutils/bin/blobtest, is available for use from the command line. Invoke the command with the --help argument for usage information.
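A minimal usage sketch (only the --help argument is documented here; other arguments are described in the tool’s usage output):
    # Print usage information for the blob manipulation tool.
    ~/component/imutils/bin/blobtest --help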
The viewer adds a checksum to the profile when saving it, and the server calculates a checksum and ensures it matches the submitted checksum before it overwrites the saved profile. When the viewer requests the checksum from the server for validation, the server sends the calculated checksum.
UPGRADE NOTICE: The temporary DICOM storage folder has moved to the repository root. Registration processes initiated by the application server are redirected to the registration server using the intracom service. This feature includes a change to the temporary DICOM storage folder. When the DICOM repository is configured with no mount points, DICOM files are placed in the DICOM repository root folder, ~/data/dicom.repository/tmp (instead of ~/data/tmp). This makes the process consistent with handling repositories with multiple mount points and makes the data created by the application server accessible from the registration server(s).
To avoid unnecessary error messages in the log, JIT (just-in-time) image processing has been disabled (temporarily) when loading an unprocessed study in the technologist view page.
In order to notify users that the study they are attempting to display is unprocessed, the server needs to check the processing status plus the state of scheduled processing tasks. Once it has the state information, it provides the information to the calling entity so the user can be notified of delays caused by the just-in-time processing effort. An additional interface exists to allow the viewer to monitor the number of processing tasks so it can report the status as it completes.
Administrators can restore a user’s or group’s viewer profile from the available backups using the Profile Backups page available from the user and group accounts page’s Manage Viewer Profile tool. The admin can create, delete and restore backups created by the system and user.
An interface framework (component) has been added to pass commands and jobs to the server performing a role that the originating server does not itself provide, or to balance the load across multiple servers performing the same role. The component is called intracom. It uses port 4651, which can be overridden by INTRACOM_SERVICE_PORT in ~/etc/virthosts.sh. It starts the intracom service, which accepts and services gRPC requests from other servers in the server farm. This service is currently started on application and registration servers.
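An illustrative override, assuming ~/etc/virthosts.sh is shell-sourced and accepts a plain variable assignment:
    # Override the default intracom port (4651) on this server; the port shown is an example.
    INTRACOM_SERVICE_PORT=4652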
Control variables have been added to the (inbound) coercion rule command library. Control variables start with an at sign (@) and use upper case characters. A single control variable has been introduced: @PROCESS. If the rule assigned to the control variable evaluates as NULL, processing (storing, forwarding, etc.) will stop and a log entry is registered indicating this. For all other results, processing continues. Note: at this time, control variables are recognized by pb-scp only. Refer to the eRAD PACS Data Coercion manual for details.
The device auto-forward setting instructs the system to send all objects acquired from third party devices to it, except for objects the device sent itself. Updates to objects are also sent (i.e., objects applicable to the “keep sending updates” setting). The limitation is that new data generated by the system for a study that originated from the configured device will not be sent to the device. A feature has been added that instructs the system to auto-forward everything it did before, plus any object created on the system. In this way, presentation states and secondary capture objects created by the user and added to the study will be sent to the device from which the study originated, assuring both systems have the same collection of objects at all times. The setting is available as a checkbox labeled Sync in the DICOM services/settings section of the device edit page.
The stream server component has been modified to run independently of other medsrv components. Stream server devices are assigned streaming sessions in a round-robin fashion. As a result, for a given session ID, the same stream server is presented so the viewer can reuse existing connections, when possible.
Data ingestion has been separated into a dedicated role and dubbed the Registration server as part of the baseline framework effort.
Data processing has been overhauled as part of the baseline effort to minimize iops by storing data as blobs in single files.
Data caching has been overhauled as part of the baseline effort to minimize iops by storing cache data as blobs in fewer files.
The database has been overhauled as part of the baseline effort to eliminate inefficient and unused fields, store new data such as a study’s processed state and repository location, and support object information that previously existed in the retired object table.
As part of the overall refactoring, connections to the SQL server persist. The framework caches prepared statements for reuse.
This is the application of the repository handler’s new middle layer, which tracks the state of meta data in the repositories and handles data that exists on multiple repositories.
Poco version 1.11.2 is installed.
When a networked storage device is unreachable, access requests time out and the device is taken offline so subsequent requests can complete. While offline, access requests to the device are ignored. The system backs off for five minutes, checking the device after each period until it is back online.
Obsolete components have been removed from the code base, including applet, pref, ct and pcre. Some medsrv components have been obsoleted in favor of the platform component, including curl, boost and openssl.
The Customize Labels page used to customize the database has been updated to use GWT and adopt a look and feel similar to other web pages. All existing features remain, including the ability to configure individual settings for most database fields and the ability to create and modify calculated fields. Some minor differences exist as a result of changes to the associated feature, not because of the update to GWT. See the user help pages for details.
Worklist columns defined as enumerated lists might contain values not present in the configured list of values. A free text field is available in the filter panel so these values can be entered as search criteria.
Multi-value fields such as Modality allow filtering on multiple values. Users drag the value into the filter panel. Individual values are separated by backslash characters.
A study field, PROCSTAT, has been created to track the process state of the study. States include <empty> (state unknown), frozen (DICOM objects exist but unprocessed and uncached), cold (processed but cache data removed or obsolete), cooking (partially processed) and cooked (fully processed and cached). The value can be displayed on the worklist.
A command line tool, ~/component/tools/checkWeakPasswords.sh, exists to identify and update user accounts using weak password hashes. The tool is added to a cron job to run once per day; if such accounts are found, a notification message is posted to administrator accounts.
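A minimal usage sketch for running the check on demand, in addition to the daily cron job (no command line arguments are assumed):
    # Identify and update user accounts using weak password hashes.
    ~/component/tools/checkWeakPasswords.sh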
Some list pages, including the Other List page, have been updated to remember the applied filters and sort order, like the Worklist and other pages, so when returning to the page, the previous content appears rather than reloading the default page.
When configuring name, date and time formats, the system checks for anomalies such as duplication of a field component and rejects the request.
The server supports the viewer’s requests to save and delete a user profile, return the list of saved user profiles, and restore a user profile.
When importing user accounts from a backup file, the system checks the password hash and removes the weak ones. These users will need to reset their password when logging in. The affected accounts are listed in the import log file.
Some task entries on the Tasks page, specifically system tasks on the Sub-jobs page, were missing descriptions or displayed a generic description. These tasks now display a representative description in the Tasks page table.
A load balancer (haproxy) component has been created to launch the load balancer when the system initializes. The load balancer component starts if the server is configured as a load balancer in ~/etc/balancer.role. Default configuration settings exist in the component directory, ~/component/haproxy/config/. Settings can be overwritten by customizing copies of haproxy.cfg.template and syslog.conf.template in ~/var/haproxy/. The haproxy configuration file, haproxy.cfg is created from the template during startup. Proxy log files are stored in ~/var/log/haproxy.log and rotate weekly.
Resource locking previously applied to a single server, but now that resources can be accessed by multiple servers at the same time (e.g., from multiple stream servers), locking has been extended across multiple servers.
Servers that do not run apache, such as the stream server, database server and load balancing server, do not support GUI-based licensing. Additional instructions are available in the licensing manual for collecting the license request file and installing the license file from the command line.
UPGRADE NOTICE: Servers using a local (fast) repository need to be configured prior to upgrade. The stream server moves blob data from a remote (slow) repository to local (fast) repository. If the system is not configured with a local cache repository (~/var/localcache.repository), a link must exist to point to the remote repository (~/var/cache.repository) and the system will not attempt to move the data.
Web services commands have been added to query the MCS server about a job’s position in the queue, QueuePosition(), and the queue length, QueueLength(). See the eRAD PACS Web Services Programmer’s Manual for details.
Log4j has been updated to version 2.18.0. Groovy script has been updated to version 3.0.12. A custom log4j configuration file, log4j2-custom.xml, exists in ~/var/conf to override select settings from the system configuration file. Refer to the template file, ~/component/classes.com/erad/pacs/log4j2-custom.xml, for customization instructions.
The Changed State setting has been restored to the Server Settings page.
A command line java tool is available to manually start and stop the hyper+ server farm servers in their proper order, as defined by each server’s role configuration. Options include starting the server farm, stopping the server farm and listing the server groups. Refer to the Jira issue for usage details and startup order dependencies.
Web applications can download the quality control results file, ~/var/quality/qc.html, from a server provided the request comes from a qualified source, meaning a valid eRAD PACS user session ID exists and the account has admin or support rights. The command is cases/showQuality.jsp.
A worklist column, ProcSt, displays the processed (cooked) state of a study’s data, meaning it is available for streaming. A worklist tool, Reheat Study, is available to manually start processing a study for streaming.
The service role functionality used to register a service in a server farm has been separated out and now runs on each server as the hyperdirector service. This service is disabled when all services run on a single server.
Each storage repository is managed by a single server. Local cache repositories are managed by respective stream and registration servers. Global repositories, including global cache, data, processed and meta repositories, are managed by the application server.
In a hyper+ server farm, Actions are run on the application server only.
All cronjobs have been configured to run on applicable servers based on the server’s role. For the complete list of cronjobs and the servers on which they run, refer to the Jira issue. Use crontab -l after rc start completes to get a list of all cronjobs registered for an individual server.
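For example, to review the cron jobs registered for an individual farm server:
    # Run on the server in question after rc start completes.
    crontab -l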
Added support allowing the viewer to download herpa data using the streaming channels instead of from the web server.
The system checks for running study registration or reprocessing tasks when it initiates the process to prepare the study data for use. If any are found, the preparation task is postponed to avoid repeated processing tasks.
Repositories shared by multiple servers in a hyper+ server farm employ a global locking mechanism managed by the database server. Refer to the isShared setting in the repository handler manual.
DEPENDENCY NOTICE: This feature requires viewer-9.0.2.
The streaming technology has added support for gwav version 4, permitting better initial quality from smaller thumbnail images. The viewer still accepts gwav3 and gwav1, if offered by the server.
All system components, including viewer streaming, web viewer, technologist view, etc., support the single compressed cache data format (cw3). The creation of data in other formats has been terminated.
Calls to the repository handler have been replaced with a middle layer that tracks the state of meta data and manages the data accordingly, reporting data location, creating folders, moving data, indicating when folders are inaccessible, etc. The repository handler’s dirty file handling and resolving mechanism remains unchanged. See the updated Repository Handler manual for specific details.
Performance-critical calls to the database have been encapsulated in an abstraction layer so the database is not directly exposed to medsrv. In addition to providing a common interface, it allows the application to maintain persistent connections to the database.
Servers can be assigned specific roles to play, including stream server, registration server, database server, application server and web server. The setting is defined in ~/etc/.role. If no specific role is defined, all services are performed.
The common stream server code failed to generate raw files when explicitly requested. While this is irrelevant for v9 (because its stream server doesn’t use raw files), the change was made to the common code base, which v9 does use.
Java has been upgraded to java-17-openjdk-17.0.3.0.7. The system uses the platform’s version of Java.
Apache has been upgraded to httpd-2.4.37. Tomcat has been upgraded to version 9.0.63. The system uses a custom build of Tomcat but uses the platform’s Apache.
REVERSIBILITY NOTICE: Once upgraded, the database is modified and no longer compatible with the previous version.
MySQL has been upgraded to version 8.0.26. The system uses the platform’s version of MySQL.
The DCMTK library has been updated to version 3.6.7.
GWT has been upgraded to version 2.9.0.
Openssl has been upgraded to version 1.1.1k. The system uses the platform’s version of Openssl.
Studies that exist on multiple repositories (which is possible when a repository was not mounted at some point when the data was updated) cannot be deleted via the user interface or the system. Users are notified of this on the delete review page, and entries are inserted into the log files.
The UDI value for version 9.0 has been updated to 0086699400025590. This value is displayed on the appropriate software identification pages.
The Partially Inaccessible column is available to indicate when the study resides on multiple repository mounts. This column is hidden by default. Add it to your layout using the Edit Fields tool.
Forwarding a study that resides on multiple mount points will result in an error. If initiated from the GUI, the user is notified. If initiated from a forward action, the request will be retried when the action runs again (in five minutes).
Editing a study that resides on multiple mount points will result in an error. If initiated from the GUI, the user is notified. If initiated from an edit action, the request will be retried when the action runs again (in five minutes).
Editing a report or report notes for a study residing on multiple mount points is not supported. If the condition exists, the report add/edit button and the note add/edit button are disabled in the patient folder.
Java servlet functions retired or no longer in use in version 9 have been removed from the code base.
Based on timing, an auto-correction message originating at a child server can jump ahead of the first object registration message, allowing third party devices to believe a study exists before it actually does. Auto-correction messages are suspended until the hub server registers at least one object.
Web services devices can be configured to receive an order update notification when the study data has been edited. The trigger is enabled when the Study Update setting in the Order Message Triggers section of the web services device edit page is checked. Update sends a notification on new object acquisition, any edit or object re-acquisition. Reindex sends a notification when a study gets reindexed by an admin or the system.
The wording of the notification message indicating the repository handler had to delete data even though the threshold wasn’t crossed has been changed to more accurately reflect the cause of the problem.
When an object contains a non-compliant time zone offset value, the system ignores the bad data and presents the time values as recorded in the object.
Downloading CW3 images to the technologist view page and the web viewer needs to be managed by the client. A maximum of four images are downloaded in parallel to avoid overloading the browser.
Log entries, on the Logs page and in the oper_info log, containing details for events resulting from an action, except the Prefetch action, identify the worklist filter that matched the study.
The server’s license is checked against multiple events and data. When one of these is detected but is not enough to invalidate the license, the system sends a message notification to administrators. Admins can contact eRAD support for details and ways to avoid a license exception.
The media creation engine defaults to the local MCS. This applies to new installs and upgrades.
When the underlying connection to the database is lost, the software transparently reconnects and retries the pending operation.
Some features that were optional prior to version 9 are no longer optional; they are hard configured by default. The settings for these features have been removed from the GUI.
The initial registration creates the compressed image files on the local cache repository before adding them to the blob. This requires the creation of a local cache repository (~/var/localcache.repository).
DICOM data is stored in a separate (meta) repository from processed data.
The repository handler supports a callback interface used to track resource locations without needing to use the locate function.
The web services Forward command supports forwarding individual series and objects from the same study to a defined target. See web services manual for details.
Structural changes have been applied to improve the handling of server settings.
Report templates are included in the user export and import tools.
The repository handler automatically consolidates studies split between multiple partitions even when the full limit threshold has been exceeded, except when the physical limit has been exceeded. The physical limit is defined by the configuration setting hardFullLimit. The built-in default is 99.9%. This can be overridden in repository.cfg.
The background color of the individual rights fields when using the dark theme has been modified to make the setting indicator more visible.
The command line tool to recollect dotcom information includes options to report a return code when the operation encounters an error or warning.
The repo.jsp and validate.jsp scripts have been updated to dynamically generate a system session for use in automation tools.
Log entries for importing user accounts and for user conversion (during upgrade) are consolidated into dedicated log files, ~/var/log/UserExport, ~/var/log/UserImport and ~/var/log/UserConversion.
A generic report template type has been added to support adding Dcstudy fields to a report view or report edit template. See the eRAD Layout XML Customization manual for details.
The default for the warnMoveTime setting has changed to five hours for data repositories. For all other repositories, the default remains two days.
Nuked study files now include study data, which is used to populate a new web page for reviewing and deleting these files. The Study Cleanup page is available to users with Support rights from the Admin menu. The page is empty by default. Enter criteria to display a list of up to 5,000 nuked studies. The tools are consistent with those on the Worklist page. When cleaning studies that exist on child servers, start with the child before cleaning up the parent. Cleanup requests and results are logged in the forever log.
When the user updates their viewer settings, the existing profile file is saved as a backup so it can be restored later, if necessary. These backup files are propagated throughout the dotcom.
The default for the Apply to Current Content setting for all actions has changed to “No”. Existing actions are not affected as long as they remain enabled. Once disabled, the new default shall be used when re-enabled, unless manually overridden during setup.
eRAD PACS version 8 medsrv build 49, asroot 8.0.1 and platform-7.9.0 make up the starting code base for eRAD PACS v9.0. Modifications have been applied to account for labeling (eRAD PACS v9.0) and packaging (RPMs, etc.).