The PACS Server
When registering servers through the hyperdirector, the application server marked the IP it used to make the registration call as primary but did not clear the primary designation from the IPs in the host configuration information it received from the server, resulting in multiple IPs being listed as the primary IP.
The web viewer’s frame of reference index, used to indicate series sharing the same physical space, was incorrectly applied across study boundaries, erroneously linking series from independent studies to each other.
After launching the web viewer with a study having fewer series than in the previous web viewer session, the series view would appear to shake because a scroll setting was not reset correctly.
If a user attempted to delete a series or object from the Technologist page while the study was locked by another user, the delete failed because the study was double-locked.
When a user edited a user preference, the log entry failed to record the action type indicating it was a user preference edit.
When a legacy user profile setting is found, it is automatically converted to a new value, eliminating the need to notify the user.
The list of SOP classes a device supports was empty when the source device used the built-in default. When imported, the script interpreted the empty list to mean no SOP classes were supported. Now it recognizes an empty list to mean all default SOP classes are supported.
The worklist filter label field could miscalculate the width of the field, causing labels to appear outside the field limits.
If the user cancels a worklist query that triggers the query qualifier, the resulting table contained no column headers.
The purge action’s object list was updated to add missing SOP classes. The list is configurable by editing ~/var/dicom/storageSOPClasses.lst and restarting apache.
The user preferences dashboard settings listed the Statistics dashlet types as a Messages type.
When an image’s pixel spacing was negative, the system overrode the value with a fixed distance which caused some images to appear compressed. Using the absolute value of the defined value provides a usable distance without causing presentation anomalies.
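The fix described above can be sketched as follows; the helper name and fallback value are illustrative assumptions, not the product code:

```python
def effective_pixel_spacing(declared: float, fallback: float = 1.0) -> float:
    """Return a usable pixel spacing.

    Hypothetical helper: a negative declared spacing is treated as a
    sign error, so its magnitude is used rather than substituting a
    fixed distance, which previously compressed some images.
    """
    if declared == 0:
        return fallback      # no spacing information at all: fall back
    return abs(declared)     # keep the magnitude of a signed value
```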
If the query qualifier suspended a worklist search and the worklist remained active long enough to trigger a worklist refresh, the worklist refresh ignored the suspension state and performed the query without the query qualifier check.
When taskd was stopping, meaning it had received the command to terminate but had not yet stopped, new tasks were created. Running tasks that create new tasks now put the new task into the (unprocessed) database instead.
Series- and object-level auto forwards were inefficient. If sending updates was applied to the forward, tasks were created to check the entire study for resends, not just the affected series or objects.
If a study was deleted from the GUI, including as a result of applying the study cleanup tool, and multiple references to the study existed, possibly as a result of a failed synchronization message or send request, the duplicate reference was not deleted. This caused new tasks to be collapsed and disappear rather than be cleaned up.
When the user attempted to edit a study’s Status field but selected no value from the list, and then selected another field to edit before clicking the Save button, a Java exception notification appeared in the user interface and the edit failed.
If a merge is started but incomplete when the server receives a web services command to edit a study (PbR file, specifically) while the queues are backed up and there are late objects that need to be moved, the server could detect the obsoletion time stamps in the PbR while processing the edit and delete (obsolete) the study.
Based on timing, it is possible the multihub cleanup tool could create and leave temporary files on the target server that might end up in the object table. Note that v9 does not support the multihub cleanup tool – there are no hub servers – but this change exists in common code so it is applied here as well.
The patient folder XML page definitions do not support external reports correctly so if an external report macro is added, the patient folder’s report page will display an exception when attempting to render an external SR report. The restriction causing the exception has been removed.
The list management tools were reviewed and optimized to improve lock management and avoid deadlock conditions. The system locks lists on an individual basis and avoids using global locks.
When using a Chrome or Edge browser with secure HTTP in the URL while the server was configured to force the insecure HTTP protocol, redirected viewer requests went into an infinite loop and the study would not load. The solution is to use secure HTTP when it is declared, regardless of the server’s secure connection setting.
The Java VM failed to set the automatic retry flag in the signal handler, so when an interrupt occurred during a blocking (locking) call while creating a task file, task creation failed rather than being retried.
Attempts to stop taskd’s JVM while the code was inside a loop trying to connect to a non-existing taskd listener caused the termination to hang for up to ten minutes.
When running in authoritative mode and a study exists on multiple mounts, only the data on one of them was detected because the appropriate wrapper was not created in the database when the data was acquired and stored.
When rescheduling tasks from the user interface, the schedule command was not created, causing the collection process to encounter an error collecting the task’s properties.
Double-clicking the merge button when correcting an order to a study caused the function to be called twice. The second time it performed a normal study merge, displaying the study merge page rather than the correction page.
When loading a canned report template containing special characters, the special characters were not recognized and invalid characters appeared in their place.
After updating Chrome, attempts to validate the farm configuration using a secure HTTP connection failed to connect to the other servers because the cross-origin embedder policy setting was not defined in the response header.
The service class column for origin and replica devices on the Devices configuration page was empty because the device class changed and these options failed to be extended to it.
A code-level option exists to suppress low-level log entries when locking a database table. Some low-level errors are handled at a high level, making the log entry unnecessary and misleading.
Attempts to export the data from the origin server failed because the migration tool that checks the replicator’s resources encountered an exception when collecting task counts from all servers. Additionally, the collection effort was running twice, unnecessarily. This has been corrected.
Based on timing, it was possible for a database connection to remain open when a worker thread closed, leaking database connections that would eventually prevent access to the database.
When the cache repository is in authoritative mode, stream servers might fail to locate a study in the cache because the repo handler issued a request to the stream server that could not run (because taskd doesn’t run on stream servers), resulting in the failure to download images in the viewers.
Cross-origin headers present in the JSP files caused conflicts now that Apache automatically adds them, resulting in a failure to fully decompress images.
A missing animation trigger caused some annotated values to remain hidden in the web viewer.
The Tasks page added a scroll bar to accommodate task queues from many (registration) servers. Also, when the stream server is configured to run in simple mode, meaning it does not run tasks, it no longer appears on the Tasks page.
When the farm validator runs, it propagates the role configuration file to all servers, which overwrites custom configurations on some servers. The unnecessary propagation calls have been eliminated and servers are set up to accept propagation calls from the application server, only.
On server farms consisting of many registration servers, database update requests could fail and exceed the maximum retries when attempting to register acquired objects in the same study. The number of retries has been increased and retries are put to sleep at varying intervals to avoid collisions.
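A jittered retry loop of the kind described above might look like this sketch; the function name, retry count, and delay values are assumptions, not the actual implementation:

```python
import random
import time

def retry_with_jitter(operation, max_retries=10, base_delay=0.05):
    """Retry a conflicting database update with randomized sleeps.

    Hypothetical sketch: each failed attempt sleeps for a randomized,
    growing interval so that servers registering objects in the same
    study do not retry in lock-step and collide again.
    """
    for attempt in range(max_retries):
        try:
            return operation()
        except Exception:
            if attempt == max_retries - 1:
                raise  # retries exhausted: surface the error
            # jittered, linearly growing delay desynchronizes callers
            time.sleep(random.uniform(0, base_delay * (attempt + 1)))
```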
The missing thumbnail panel setting has been added to the web viewer section of the user preferences page.
The temporary directory names created when processing objects were not unique, which doesn’t work in a shared storage environment. The temporary directory names now include a unique identifier.
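A minimal sketch of generating a shared-storage-safe temporary directory name; the naming scheme shown is an assumption, not the product's actual format:

```python
import os
import socket
import uuid

def unique_temp_dir(base: str) -> str:
    """Create a per-object temporary directory safe for shared storage.

    Hypothetical sketch: embedding the host name and a UUID makes the
    name unique even when several servers process objects under the
    same shared directory tree.
    """
    path = os.path.join(base, f"obj-{socket.gethostname()}-{uuid.uuid4().hex}")
    os.makedirs(path)  # fails loudly if a collision ever occurs
    return path
```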
If the image contains no information, meaning all pixel values are the same, the compressed IQ image was too short to split into individual streams, causing the download to fail.
The web viewer would successfully display an image but report exceptions if the window width or center were zero. The values were set to one to avoid the exception.
A function that checks the task queues on all servers running taskd failed on servers running the limited version of taskd, causing some processes, including reheat requests on stream servers and wormhole takeover, to fail.
Per-object access to the processed data (on tier 3 storage) when reheating and registering data resulted in poor performance if the underlying storage technology was slow. The process has been changed to aggregate the data into blobs stored on local cache to reduce tier 3 storage access.
When an object was being registered and the taskd session could not be started, the object was moved to the study directory prematurely. When the Dcreg task finally started, the registration task was sent to retry; when executed, it found no data in the temp directory, exited, and released the object.
When mapping objmeta file data to the object table failed, removing the references resulted in a memory leak that eventually triggered the OOM killer, which killed taskd.
When deleting studies whose objects (blobs) have been loaded into the object cache and the object cache database table, the system deleted the cache data but failed to remove the database entry.
If an exception occurred while applying an Action to a list of studies, none of the studies would be marked complete because the status was recorded at the end of the process. To avoid reprocessing studies during the next cycle, completion status is recorded immediately after each study is processed.
The study details information for an entry on the Logs page were missing because a change to the internal date type was not applied to some configurations, and some incorrect COLIDs were used.
The MySQL connection can close, causing the java VM to crash, when loading the object table for a study with a large number (multiple thousands) of objects.
When running in data takeover (wormhole) mode, orders created on the Origin server are reproduced on the Replica (v9) server and include the .info file. When no longer in takeover mode, the v9 server does not support .info files. As a result, the server is unable to resolve an order properly. The server now recognizes this state and, if possible, selects one of the orders as the primary order.
The web viewer required a minimum of 16K bytes in the compressed data stream. When the streamed file was smaller than that, decompression failed and no image appeared.
REVERSIBILITY NOTICE: The new blob format (.f.ei4) is incompatible with older versions. If downgraded, processed (blob) data must be purged and regenerated.
The herelod’s compressed stream data failed to apply the initial quality calculation method. For small images, such as MR and CT, this resulted in thumbnail-sized (low resolution) images. When displayed at normal size, they were poor quality images.
If two PbR files exist in a broken study, the system does not attempt to clean it up. It logs the finding and leaves the study in the broken state so the Admin can clean it up manually.
If a duplicate object arrived while the original was part of a collapsible task and the task failed, the duplicate was not added to the collapsible task when the task was retried, as intended.
A cross-site scripting vulnerability on the custom logo upload function has been eliminated.
If the patient folder listed a prior study but that study is not available to the user because of an access list restriction, the system reported an error when attempting to display the report in the patient folder. The system is supposed to override the access restrictions on prior studies if the user has the right to view the current study.
The web service’s Report command failed if the request didn’t contain the conditional Option parameter, or the Option parameter did not contain the conditional UpdateEditable element.
A debug log entry was created but failed to check the exception criteria, resulting in a misleading log entry. When the exception does not indicate an error, the entry is no longer logged.
A cross-site scripting vulnerability on a component of the techview page has been eliminated.
When multiple users submitted a request for a (study) list, the toolbox applied a global lock which resulted in a delayed response. The global lock for this and other toolboxes has been replaced with RW locks.
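The switch from a global lock to reader-writer locks can be illustrated with a minimal RW lock sketch; this is an illustration of the pattern, not the actual toolbox implementation:

```python
import threading

class RWLock:
    """Minimal reader-writer lock (illustrative sketch only).

    Many readers may hold the lock at once; a writer waits until all
    readers release it, and readers block while a writer is active.
    Replacing one global mutex with per-list RW locks lets concurrent
    list requests proceed in parallel.
    """
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0

    def acquire_read(self):
        with self._cond:          # blocks while a writer holds the lock
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()  # wake a waiting writer

    def acquire_write(self):
        self._cond.acquire()      # exclude new readers
        while self._readers:
            self._cond.wait()     # drain existing readers

    def release_write(self):
        self._cond.release()
```

Note this sketch can starve writers under a constant stream of readers; a production lock would queue writers ahead of new readers.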
A performance improvement that updated the web viewer only when the view/image changes eliminated the trigger that activated the layout grid tool, disabling the ability to change the grid layout.
An object processed by a compress action was always re-registered, even when the object did not need to be compressed and therefore did not need to be re-registered.
When deleting a study using the infoCollector tree, for example when using the study cleanup tool, and the affected data does not exist, the system failed to detect the data didn’t exist and reported an exception.
After adding a web page dashlet to a user’s dashboard, an exception occurred because the shared module did not check for the presence of a conditional object.
The free space available on each mounted device was logged every five minutes but the calculator ran every minute. Now the calculator runs every five minutes.
When a person name field was defined to use an enumerated list of names, the system failed to ignore name formatting characters when checking for matches.
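Ignoring name formatting characters during enumerated-list matching might be sketched like this; treating '^' and '=' as DICOM person-name separators is an assumption about the formats involved, and the helper names are hypothetical:

```python
def normalize_person_name(name: str) -> str:
    """Strip formatting characters before enumerated-list matching.

    Illustrative only: DICOM person names separate components with
    '^' (and '=' between representation groups), so 'Doe^John' and
    'Doe John' should compare equal against a plain name list.
    """
    for ch in "^=":
        name = name.replace(ch, " ")
    return " ".join(name.split()).lower()

def matches_enumerated(name: str, allowed: list[str]) -> bool:
    normalized = {normalize_person_name(a) for a in allowed}
    return normalize_person_name(name) in normalized
```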
An obsolete feature for selecting the web viewer’s grid layout could be activated, leading to erroneous results. The obsolete tool has been removed.
After changing the grid layout to something other than 1-up, the link tool button might be drawn incorrectly because it did not exist in the original layout and was not accounted for in the updated layout.
After restarting taskd, the task was added to the database, and when its relative priority was changed to high, the task name was changed in the database but not in the object. As a result, the task remained in the queue.
The options to delete immediately and keep the reports are mutually exclusive, yet they were both active when configuring a purge action. Now, checking the box to keep reports will disable the delete immediate and purge matching object options, and vice versa.
When the patient orientation is invalid, possibly due to rounding errors, it gets reset to the default, but that could cause the web viewer to cross-correlate unrelated images. To avoid this, less significance is given to the individual vector values, and if applicable, the frame of reference is ignored.
Media creation tasks remained in the retry queue after a media session was canceled. The tasks were intended to verify that the database records and media directory had been removed, but they encountered an exception when checking for the directory’s existence, causing the task to go to retry.
A prefetch action using a compound list that includes a group by union resulted in a database exception because prefetch requests use conflicting group-by directives.
Moving a dashlet from one page to another on the dashboard configuration page was saved only when the dashlet was created. Moving it afterwards appeared to work, but the change was not saved.
The application server’s error log contained warning messages about non-applicable components failing to run.
When copying profile settings from one user to another using the GUI’s copy profile tool, some toolbar locations and shortcut tables were not copied completely.
Authoritative mode introduced a new status code that the data repository uses to indicate a missing folder. Some parts of the system failed to recognize the status, resulting in errors rather than exception handling, particularly when creating orders.
If an error occurred when adding a note to the patient folder, the message disappeared after the user acknowledged it, but a data field that was not cleaned up prevented the user from trying again to enter the note.
Multiple refresh page buttons appeared on some table pages, including the media export page, only one of which performed a page refresh.
Displaying an uploaded PDF file failed on a server farm because the call to extract the document contents, which is performed by a registration server, was issued on the application server.
If the export media table page contains hidden fields, some of which might be included in the default list, the user might receive an error message.
An incorrect character encoding value caused some viewer configuration panel labels to contain invalid text characters.
The ability to set the status when closing the viewer required both report editing and status setting rights. This has been corrected and only the status setting right is required to set the status when terminating a viewer session.
Streaming connection calls returned timeout exceptions instead of disconnection exceptions, which caused the stream server to leak threads. After receiving multiple timeout responses, the stream server checks the connection directly to see if it’s closed, and then releases the associated threads.
Key images failed to appear on web page reports because the processed data did not include the compressed image data.
UPGRADE NOTICE: This change applies to cached data. It requires reprocessing affected data. The web viewer did not consider pixel spacing information from all possible locations, specifically the ultrasound region sequence. Without pixel spacing information, the web viewer displayed measurements in pixels rather than linear units.
The farm validation tool failed to recognize that repository mounts are read-only when the system is running in takeover mode, generating an invalid warning.
When using an existing user account to create a new one, the system ignored the mandatory field check and could create the new account with missing required fields.
Some secondary capture objects did not contain the orientation data needed to render the object, and the web viewer failed to generate it, resulting in no displayable image.
Attempts to display results on the Monitors page for a remote server failed until the remote server had run the monitor page locally.
Action configuration returned unsupported data types that caused the DVCL engine to report an exception.
When a DICOM Q/R SCP returns multiple retrieve AE titles in the response and none of them match devices registered in the system, an exception occurred and the retrieve failed to complete.
When the same object was sent to the server multiple times, a race condition could leave the study in a partially cooked state.
If one of the XML entries in the user import file was structurally invalid, the system stopped processing the import. Now it logs and skips the user with the affected record and continues with the next entry.
A new study showed its initial state as frozen rather than cooking before the first registration task completed and the study entry was created in the database.
The Group Open tool erroneously included orders in the prior study list. These have been suppressed. Note that a completed order can still be opened if it is listed as the primary study in the session.
Dynamic allocation of buffers in web assembly could invalidate previously allocated buffers, resulting in an exception.
Action lists failed to retain the order of studies when new studies were added (including processing subsequent batches) because the date stamp didn’t provide the necessary resolution and processing some data modified the add date value.
When a graph was requested from the server before the server collected any data points, an invalid (large) initial value could be displayed by default.
A synchronization issue occurred when generating the error to a web services command, resulting in an unnecessary log file. The processing order has been corrected, the correct error is reported and no log file is generated.
When running in takeover mode and the user is logged into the Replica server, presentation states fail to get saved on the server because the upload failed to locate the Origin server. Additionally, the request always returned a success status, meaning the user was unaware the operation failed.
If creating DICOM media on a frozen study, the herpa data is unavailable. As a result, the media is created without it and the data is processed when the user opens the viewer and loads the study.
Editing a study’s report multiple times resulted in leaked study references that would not get cleaned up, causing a study’s state to remain in the cooked state even when reheating.
The query qualifier was tripping for result counts far lower than the configured thresholds because the join table failed to include distinct criteria.
Re-sent objects whose registration tasks exited because the objects were unchanged and didn’t trigger a prep study task could prevent the final step of the previous prep study task from running. As a result, cooked studies lacked herpa data.
When checking for orthogonal orientation, a floating-point comparison of near-zero values erroneously reset and removed the localizer lines.
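A tolerance-based orthogonality check of the kind implied above; the epsilon value and function name are illustrative assumptions:

```python
EPSILON = 1e-6  # tolerance for treating a dot product as zero

def is_orthogonal(v1, v2, eps: float = EPSILON) -> bool:
    """Check two orientation vectors for orthogonality.

    Illustrative sketch: comparing the dot product against a small
    tolerance instead of exact zero keeps near-zero floating-point
    noise from erroneously resetting valid localizer lines.
    """
    dot = sum(a * b for a, b in zip(v1, v2))
    return abs(dot) < eps
```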
After deleting a series or image from the technologist view page, the images might appear in the main viewer when the study is loaded if the open request is issued before the prep study task completes.
When scrolling an image in the web viewer when the cursor was situated over a thumbnail/cross reference image, scrolling stopped because of a missing mouse event.
Carousel images on the technologist page were missing because the cookie session ID was not passed through to the stream server.
When creating media from studies existing on different hub servers, a valid yet irrelevant exception gets logged indicating a failed attempt to generate an ID for an auto-increment field.
The Apache monitor could fail if it issues a query for a thread name after the thread has already terminated.
The farm validation page didn’t recognize the color scheme setting and presented dark text when using dark mode.
A separate thread is spawned to handle gwav decompression in the web and tech viewers. In some cases, additional threads were spawned unnecessarily and the existing ones were left unmanaged. Additionally, buffers exchanged between the threads were not released correctly, resulting in a memory leak.
The task page limited the amount of task data it could display. When large task queues existed or when consolidated task data from many servers was abundant, the buffer size could be exceeded, resulting in an exception and truncated results.
After adding a custom database field, no localized value exists in the resource file and that generated an error message in the logs.
An error message was mistakenly created in the weekly log when the user changed some preference settings.
Hounsfield annotation was performed on the server using the raw data files. Since v9 eliminated raw files, the tool fails, resulting in an error message in the web viewer. The annotation feature has been refactored to use client-side image data.
If a system message occurred while the user had a curtain (popup) panel displayed from the Preferences page, the message content was empty.
The repository handler had debug logging enabled by default. It has been changed to be disabled by default.
The stream server’s packet assembly process could get stuck in a loop when the last entry had multiple entries for the same file and processing of that file failed. The failure status was not propagated to the other processes of the same file, causing them to wait endlessly.
Insufficient checking of a return code permitted some system lists (i.e., those owned by the @system account) to appear on a non-admin user’s saved filter list.
The warning message about password strength indicated a feature that is no longer supported. The message has been updated to reflect the current solution.
Some tools available on the report page broke when the profile file format changed to XML. These include the field to show the radiologist, to show the transcriptionist and to select the key image size.
Log entries indicating an action was performed on a study included an invalid, fixed-text indication that the study state changed.
Compress action tasks can fail and go onto the failed queue if the study is purged before or while the task is running.
DICOM media requests, from any source, that specified a series or object that was not part of the default – typically, the first – series or object failed because the assigned directory identifier was defined using the default’s ID. Since the default’s series/object was not present, the directory could not be located when building the media file.
A function used to display the results of a search didn’t check the user’s permissions, allowing someone to improperly use the URL to access restricted data.
Task page filters on the name fields returned no matches because the filter function on the Tasks page supported simple text filters only. Now it supports the more complex name filters as well.
Timed-out database connections in idle stream server threads could result in a (regserver) crash when multiple stream servers start running again.
Unprotected thread handling around database connections could cause system components that use the repo handler, including taskd, apache and regserver, to crash when they run after being idle for a period of time (about eight hours, or longer than the mysql wait timeout period).
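A validate-before-use pattern for idle connections, as described above, might look like this sketch; the class name, opener callback, and timeout value are hypothetical:

```python
import time

class PooledConnection:
    """Validate-before-use wrapper for a cached database connection.

    Hypothetical sketch: if the connection has been idle longer than
    the database server's wait timeout, it is assumed dead and
    reopened instead of being handed to the caller, avoiding crashes
    after long idle periods.
    """
    def __init__(self, opener, wait_timeout: float = 8 * 3600):
        self._opener = opener
        self._wait_timeout = wait_timeout
        self._conn = opener()
        self._last_used = time.monotonic()

    def get(self):
        now = time.monotonic()
        if now - self._last_used > self._wait_timeout:
            self._conn = self._opener()   # reopen a timed-out connection
        self._last_used = now
        return self._conn
```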
When acquiring objects, the study directory was created before the coercion rules were applied. If the coercion rule instructed the system to drop (i.e., not register) the object, an empty study folder could persist. The system now creates the study directory only after it knows it’s going to register the object.
The scope of the dirty flag handler changed when the system started caching repository handler instances. It now has to check whether other threads modified the dirty file. Additionally, an unnecessary smart semaphore lock, created when the dirty flag handler accesses its own cached dirty flag, has been changed to a simple memory lock to avoid a strain on resources.
A cached database connection providing efficient access from the repository handler was not being used by the check overload function.
Processing a wormhole (data takeover) notification to delete a study which does not exist on the Replica server failed because the study’s meta directory didn’t exist. As a result, subsequent notifications from the Origin server could not be sent.
The absence of a default value for the Prepared Study database field resulted in it being assigned NULL for each study registered through the wormhole (data takeover), preventing the column from appearing on the worklist.
DEPENDENCY NOTICE: This fix requires an Origin-side fix. For v7.2, the fix is in 7.2 medley-97. A Replica system received sync messages from multiple hubs when the study was broken (i.e., resided on multiple hubs) on an Origin system. One message indicated additional objects existed. The other indicated objects, and even the study, were deleted. Depending on the order in which these messages arrived at the Replica, some objects could remain unregistered.
A change to the file name extension of compressed data files was not applied to blob file lookups, causing requests to download JPG images to fail.
The tool used to parse meta data objects ignored empty trailing fields, truncating the data when it was updated. The truncated data caused data import (during takeover) to fail.
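The trailing-empty-field hazard can be illustrated in miniature; the '|' delimiter and helper names are assumptions, not the actual meta data format:

```python
def split_preserving_trailing(record: str, sep: str = "|") -> list:
    """Split a delimited record without dropping trailing empty fields.

    Illustrative only: some split routines discard empty trailing
    fields, which silently shortens a record whose last fields are
    legitimately empty and breaks a later field-count-sensitive read.
    Python's str.split keeps trailing empties, so the round trip
    preserves the field count.
    """
    return record.split(sep)

def join_fields(fields: list, sep: str = "|") -> str:
    return sep.join(fields)   # round-trips without losing field count
```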
When the trailing fields in the object meta files were empty, the system truncated the data, resulting in a failure to read them.
Studies on an origin server with a process mode state set to Store failed to create cache or processed data on the replica server. Since v9 always processes data, the setting is ignored during takeover.
If a group open request includes an order, the viewer loads the studies, including the order, but the images failed to show up because the order contained no blob data, and it halted the streaming of all images.
If a server farm consists of multiple servers but does not include a load balancer, typically because only one server performs each defined role (an uncommon but valid configuration commonly used in validation testing), the intracom client failed to run because it required the presence of a load balancer.
Reheating and reindexing a study during data takeover could find and register temporary files from the study directory, causing duplication of some objects.
Some special characters, including apostrophe and backslash, in text strings were inserted into the database preceded by a backslash. When displayed in the worklist, the extra backslash character appeared.
UPGRADE NOTICE: This fix applies to new data only. Existing studies affected by the bug must be cleared from the object cache table. If the last field in the object table data is empty when the object is swapped out of the meta database, the data was truncated and the subsequent load would be aborted.
When in takeover mode, the replica server failed to create orders the origin server sends over because the replica server, whose storage is read-only, attempted to create the folder.
Modifying a filtered list assigned to an action caused the action to mishandle the current content setting, causing the action to be applied to existing data regardless of the setting.
When in takeover mode, users and the system were unable to create an order on the replica server because it attempted to create the study repo itself rather than passing the request to the origin server.
When reheat tasks timed out, they were treated as generic registration errors and sent to the failed queue rather than the retry queue.
A mishandled parameter in a call to remove a study from the action processing table prevented studies that no longer match the filter criteria from being removed from the table, consuming resources indefinitely.
The data structure for storing the result of a MySQL query was not bound before the query was executed, causing a write to undefined memory and crashing taskd.
If the studies on an action list do not change between action events, the action fails to execute because the check for an empty array was performed before the list was converted to an array.
Some monitoring tools, specifically Time of SQL Query (s), Memory Usage, and Memory Usage Actual, mishandled the input data format, resulting in invalid or missing output graphs.
Importing users and groups from v7.2 failed because the group table name changed, empty table checking was missing and the action filter table was missing an ID field.
When applying a (worklist) table filter by dragging a value into the filter criteria area, the COLID could be missing, resulting in an exception and a failed query.
When applying compound lists, it was possible to miss records that satisfied one of the lists but not the other if the second list included criteria that excluded the records on the first list. By handling the query as a union of separate lists, all matching studies are included.
Reports submitted from the viewer failed to insert the Study Date and Study Time values after the database date/time record format changed.
A security vulnerability occurring when using the forgotten password feature has been eliminated.
The option to exclude devices in the device import script, importdevices.sh, mistakenly applied to DICOM devices only. Now all devices are checked against the exclusion list.
Some calls to retrieve a repository’s absolute path failed if the repository root itself was a symbolic link.
When the repository root is the same as the repository mount point, temporary files were placed in the repository root directory rather than the tmp directory. When the object was later moved into the repository, the system attempted to remove the file from the data tmp directory instead of the repository root directory, leaving unmanaged files behind in the repository root.
The initial quality (IQ) images were saved to the processed repository on (slow) tier 3 storage rather than the local cache repository (on fast tier 1 storage).
When a study is processed in parallel on multiple threads, locking was inefficient because of the long delay between lock retries.
If a viewer session timed out while the viewer and its patient folder panel remained open, the panel continued to issue refresh calls, resulting in exceptions recorded in the error log.
When an SQL exception occurred while searching the database, it could be mishandled and, in some cases, clear an action’s “done” list. The next time the action performed the query successfully, all the studies would get (re)processed.
When creating a new web service device, the default user from an existing web services device would be inserted as the default user of the new device.
Idle stream connections were entering a sleep mode that was not yielding sufficient CPU cycles.
If purging is enabled for an NFS shared drive, makespace() failed to run because the tool used to collect the disk usage data does not work with NFS-mounted devices. The tool now uses the mounted directory instead of the device.
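The directory-versus-device distinction can be sketched in Python; `shutil.disk_usage` stats the mounted directory itself (statvfs under the hood), which works for NFS mounts that have no local device node to inspect. The function name here is illustrative, not the product's tool.

```python
import shutil

def free_bytes(mount_dir):
    """Report free space by querying the mounted directory path
    rather than the underlying block device.  NFS mounts expose no
    local device node, so device-based queries fail on them."""
    return shutil.disk_usage(mount_dir).free
```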
Under some conditions, most often when reheating a study, completing multiple tasks within a short period of time left the task counts incorrect and some completed jobs remained visible on the Tasks page.
When cache processing encountered an error, the error was handled correctly but the status was set to Cooked regardless. Error values are now checked, ensuring the status reflects the processing results.
No solution was in place to trigger reheating a cold study after acquiring new objects.
Studies with no images were excluded from the cooking process, even though they require herpa data and empty blobs before a user can open them.
When passing information to the viewer about the next/previous studies on the user’s worklist, the server mishandled zero-image studies. As a result, the viewer might incorrectly disable the next/previous study buttons.
The action filter states in the Other Lists filter panel page have been changed from a text field to a list of enumerated values.
Some toolbox functions call themselves redundantly, creating a possible global locking issue. These locks have been changed to local locks to eliminate the possibility of a lockup.
The number of connections a stream server supports has been increased to 8192. Note that each connection requires two threads, making 4096 the maximum number of simultaneous user connections.
None of the viewer files, including the viewer executable itself, were copied to DICOM media because, after the media option settings moved to the database, the setting values were not converted properly to Boolean values and were therefore misinterpreted when creating the media.
The access key inserted into the PBS file (to support stream server session authentication) was put in the wrong location, causing the viewer to misinterpret the study list when initiating a new session.
When configured as a server farm, the stream server and application server are separate and the session ID managed by the application server is unavailable to streaming connections. As a result, web viewer access from a stream server could not be authenticated until this change, which passes the session ID in the streaming protocol.
The preferred WebAssembly code failed to load because the web viewer interface was missing a MIME type definition, forcing the web viewer to fall back to a sub-optimal technology.
Redundant and time-consuming calls to obtain an object’s repository location were removed because the study location doesn’t change.
Herpa creation tasks preparing a study for cooking recursively locked the cache repository, causing timeout delays.
Tasks that restore an object table record from the meta data could crash deep within JNI when a gRPC client was invoked from JNI after database operations had also been performed there. Object cache mapping has been reimplemented in C/C++ so that gRPC is no longer invoked from JNI.
The inclusion of an unnecessary session ID when calling the PDF creation tool caused the conversion script to enter an infinite loop and ultimately fail when creating DICOM media.
The inconsistent order of locking and unlocking of two different locks when reprocessing a study’s data could cause the task manager to become deadlocked.
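The general fix pattern for this class of deadlock is a single, global acquisition order for the two locks; a sketch with hypothetical lock names, not the product's actual locks:

```python
import threading

LOCK_A = threading.Lock()
LOCK_B = threading.Lock()

def with_both_locks(fn):
    """Acquire the two locks in one fixed global order (A, then B)
    everywhere, releasing in reverse.  The deadlock described above
    arises when one code path takes A-then-B while another takes
    B-then-A, each thread holding the lock the other needs."""
    with LOCK_A:
        with LOCK_B:
            return fn()
```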
After increasing the default MySQL connection limits (see HPS-371, released in this build), it was determined a single default is not sufficient. A better connection limit default for the system was the original 4, so this setting has been restored. Default limits for Tomcat and Hermes are now set to 32. Also, the connection pool now creates connections as needed meaning none are initialized by default. Default limits for other java VMs can be defined using the override file ~/var/conf/modules.xml. See Jira for a list of affected java VMs and configuration details.
The pattern used to match MySQL’s version number no longer matched the current version’s format, resulting in invalid error messages in the log file.
Study row selection on the worklist could become inconsistent, resulting in misapplication of a batch tool.
Merging two or more studies into a new study, merging that study with a different study, and then issuing a delete request could leave invalid state data in the database due to a missing lock when processing the merge and delete requests, preventing the registration of the original studies if resent.
While the load balance server doesn’t use the remote database or a local database, it does generate logging data and that data is logged in the global database. As a result, the load balancer server requires the mysql component.
Media import had not been updated to support the server farm roles, attempting to upload the data to the application server for processing. This feature has been updated to upload the media data to the shared temporary repository and the command to perform the import is submitted to the registration server.
Exported worklists could be downloaded without an active user session if the user manually constructed the applicable URL in a browser window.
The updated DCMTK toolkit changed its behavior when processing the samples per pixel value defined in YBR_FULL_422 multi-frame objects, resulting in an error calculating the full image size. A workaround has been applied that intercepts affected image objects and calculates the full image size as defined by the object.
Object level log entries were incorrectly included in the log database. This has been corrected so they appear in the forever logs only.
The load balancer server’s configuration used hostnames rather than IP addresses, which won’t work at sites that are not set up to resolve FQDNs. The generator script now uses IP addresses when available and falls back on hostnames when not.
User-initiated study delete requests could cause taskd to lock up when a deleted task attempts to add a new cleanup task.
A recent bug fix prevented a RIS user from opening Completed orders in the viewer or web viewer. Support for this behavior has been restored.
When installing a server from scratch, the hyperdirector RPM is pulled in as a dependency but isn’t started, causing a failure during startup since it is expected to be running.
Failure to pick up a modified environment variable before starting the hyperdirector caused the server validator to fail.
Changes applied to user session management within a server farm were not applied to the performance monitor page, resulting in an exception.
The spinner graphic displayed in the terminal window when running the startup script dumped multiple newlines on the screen because the animated character required multibyte character set support, which wasn’t applied by default. The character has been replaced with three dots to indicate the task is in process.
When the user changes some settings, a session refresh updates those settings so they take effect immediately. Changing the assigned viewer was missing from this list of settings. As a result, changes to the applicable viewer didn’t occur until the user initiated a new web session.
The Move Left button on the group member edit page was placed at the midpoint in the group list. On systems with many groups, this placed the button off the initial screen. The button has been moved to the top of the list.
When editing a notification action assigned to a system list, the target email list appeared blank rather than listing the notification recipients.
If no default document type was assigned to the server, the attachment upload GUI did not filter the other settings on the page, allowing users to assign unsupported combinations of settings and causing some uploads to fail.
A recent change to display reports in the worklist patient folder was applied too broadly, affecting old style indexing used by the viewer’s patient folder, making external reports unavailable from the viewer's patient folder window.
Changes in the DCMTK toolkit allowed the system to generate UIDs longer than the maximum field size. The algorithm for generating UIDs has been modified so all UIDs are unique and within the permitted length.
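One way to keep generated UIDs within the DICOM 64-character limit while preserving uniqueness can be sketched as follows; the root shown is a placeholder, and the product's actual generation algorithm is not documented here.

```python
import uuid

MAX_UID_LEN = 64  # DICOM limits a UID to 64 characters

def make_uid(root="1.2.826.0.1.3680043.2"):
    """Append a decimal rendering of a random UUID to the org root,
    trimming the suffix so the whole UID never exceeds 64 characters
    while remaining unique with overwhelming probability."""
    suffix = str(uuid.uuid4().int)
    budget = MAX_UID_LEN - len(root) - 1   # room left after "root."
    return root + "." + suffix[:budget]
```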
The repository handler uses database locking, but when it cannot connect to the database, an uncaught exception leaves the locking mechanism in an invalid state.
When reprocessing a study containing no processed repository, the jit processing routine erroneously created legacy thumbnail images.
The system default calculated fields for the Corrected state (p0000) and Report Exist state (p0002) failed to appear in the configuration page, could not be modified and were unavailable as a worklist column. The data field types changed but the new types were not handled by the database.
While an email address is optional in some notification email configurations, it is required when the list owner is the system account. In these cases, users are now prevented from activating the action until an email address is provided.
A client side exception occurred when the user logged out immediately after logging in, before the worklist could display.
When a worklist refresh occurred (manual or automatic) while a report was being edited in the patient folder and a study disappeared from the worklist, reordering the worklist rows also refreshed the report edit page, clearing any report data that had been entered.
The user account lock status and the login details reported incorrect information when the user selected the account by checking the selection check box at the beginning of the row. The information was also inconsistent when multiple accounts were selected.
When loading a dashlet, an exception could occur after login due to failure to check for an initialized variable.
When a user-initiated task-related action, such as changing the priority of a scheduled task, incurred an error, the return code was mischaracterized, leaving a lock in place. As a result, new tasks would not run.
Autocorrecting studies to orders using patient name as matching criteria and a patient name containing an apostrophe resulted in a search exception and a failure to autocorrect.
The Partially Inaccessible indicator on the worklist could report an incorrect state if the user clicked the More button to display additional studies while the system was still acquiring them. The call to set the state failed because the page did not handle the request.
The startup script returned the global result variable rather than the local result variable after starting each service on each server in a server farm, resulting in a success status code even when one or more servers failed to start.
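The local-versus-global result distinction amounts to folding each service's own exit code into the overall status rather than returning whatever a shared variable last held; a sketch with hypothetical names, mirroring the shell script's logic in Python:

```python
def start_all(services, start):
    """Track each service's own (local) result and fold failures into
    the overall exit status, so one failed start cannot be masked by
    a later success -- the behavior the script fix restores."""
    overall = 0
    for svc in services:
        rc = start(svc)        # this service's local result
        if rc != 0:
            overall = rc       # remember the failure
    return overall
```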
Global restrictions were unintentionally blocking non-study related data from the Logs page.
When upgrading from v7.2 to v9, worklist filter lists whose names are too long to fit in the database are skipped, with a warning displayed on the console window. A problem in this handling caused the subsequent list to be dropped as well, without any warning.
The hyperlink in a notification email that launches the images in the web viewer was missing the login prompt. If a browser did not have a valid session cookie already, the web viewer failed to load images.
REVERSIBILITY NOTICE: This change requires regenerating all processed and cached data to a format which is incompatible with previous software versions.
DEPENDENCY NOTICE: This change requires viewer 9.0.4.4 or later.
Under very specific and unlikely conditions, the compression algorithm could encounter a matrix boundary condition that caused the compression effort to fail, resulting in no processed image.
The algorithm for resizing images to fit them in the available web viewer frame might terminate prematurely for images whose size is not a power of two. As a result, the image was improperly resized and blurry.
The technologist view page failed to disable forwarding, editing and deleting partial studies residing on multiple mount points.
Removing the processed data repository failed to change the processing status to frozen because the state change was not applied during the callback.
The user name filter drop-down menu used the user name and user ID interchangeably, both in display and in database queries, leading to confusing results. The field permits users to enter user names, and both the user name and user ID are displayed, but when the command is invoked, the value applied is the user ID.
When some non-visible characters, such as the left and right arrows, were entered into a user name field drop down panel, such as when filtering on the user name, the web page triggered an unnecessary call to search the database.
Some default items in the user name drop down menus are supposed to remain, even when the type-ahead string is applied, but they were filtered out.
When clicking out of a text entry field when configuring the default user in a web services device edit page, the style sheet was cleared and when clicking back into the text field, the user’s custom color setting was not applied.
On the manual forward setup page, the user name field (when forwarding to a folder) could become obstructed by the popup menu.
The report selection tools in the report view in the worklist’s patient folder were functioning incorrectly: the color scheme was hardcoded to dark theme; the report component icons failed to select the corresponding report component; and the delete button was highlighted instead of the report component icon. In addition to resolving these issues, a new button, Open All Reports, was added to load all the report components into a single view.
REVERSIBILITY NOTICE: To downgrade, the plugin license(s) must be regenerated.
The mammography, volume 3D and fusion plugins’ short names changed from the ones used in v7 so after upgrading, the plugin license was not recognized.
The tag list available when configuring calculated fields was unsorted, making it difficult to locate a specific tag.
A change to the time’s short format handler did not handle requests for negated search criteria.
Images having an aspect ratio other than 1:1 caused the technologist view page’s carousel to show partial images and the scrolling tools to fail. They also rendered the page’s thumbnail image size options useless.
Attempting to open a study while it was still being acquired across multiple registration servers resulted in a race condition, causing the herpa data in the blob to reference more images than have been processed.
Attempting to collect the information in a PbR object failed from the app server because herelod only runs on the registration server. This affects some web services commands and other features such as editing a study from the worklist. A new intracom service was introduced to get PbR object content from a registration server.
A fix to the user manager added an unnecessary call to prompt for a login when loading the technologist view page or the web viewer page.
Uploading attachments to studies or orders completed without error but the attachment was not saved. This was due to a corrupted environment variable extended by the MCS component’s control script.
When collecting study data failed, the result did not contain a proper error, resulting in an exception.
While linked repositories are not recommended – mount points should be linked, not repositories – the configuration is permitted. When present, the system did not always attempt to resolve the link, resulting in failures when checking the study state.
GUI-initiated requests to reprocess or reheat a study were always performed by a single registration server. Now the system allocates these tasks in a round-robin fashion to distribute the load.
The local cache repository and its default configuration files are created during startup by the cases ctrl script, but the cases ctrl script isn’t invoked on the registration or stream servers. This function has been moved to the dcviewer ctrl script.
When toggling between the Security Settings page and other server configuration pages, the security page contents may refresh and overwrite the other page’s data because an asynchronous call might have taken too long to complete.
When adding cw3 support to the web viewer and technologist view pages, some new JavaScript files were not included when running in debug mode.
The indicator on the user accounts page showing a user is logged in failed because the timestamp field type was changed in the database but the check wasn’t updated accordingly.
A retired function called when manipulating a report, such as unfinaling a report or removing an addendum, resulted in an exception. The retired function has been replaced with one supported by v9.
When a DICOM AE requests a study using a DICOM Retrieve request, the forward tasks could fail to apply the soft edit changes causing the data to be sent without the latest updates.
When using the local MCS service from a worklist server to create DICOM media containing studies from two or more different hub servers, the temporary directory names created on the hub servers did not always match the directory names on the worklist server. If the names were not unique, the conflict resulted in missing files. Additionally, the MCS started constructing the DICOMDIR file after the transfer from the first hub server completed, without waiting for transfers from all hub servers to complete.
A change to handling Boolean fields in the database was not extended to the user account lock state field, causing attempts to unlock a locked user account to fail.
While users are not supposed to open order or zero-image studies, requests to do so can occur and are handled. But the stream server failed to process these studies, resulting in a hang when attempting to open the viewer.
When the top item in the future queue was a prepstudy task for an active dcregupdate task, the task was postponed but the system failed to remove it from the top of the queue. Since the task manager only looked at the top item in the queue, task processing became deadlocked.
When the stream connection encounters an exception, such as an unexpected SSL exception, the viewer attempts to reestablish the connection by issuing a fast-connection token, but the server returns an invalid response, hanging the viewer as it waits indefinitely for the appropriate response.
The streaminfo log file was not rotated and continued to grow. The file has been added to the forever log rotation schedule.
Concurrent writes to the stream channel caused by the inclusion of streaming metric data in the data stream resulted in data corruption on the channel. This has been mitigated by submitting synchronous responses to incoming commands on a dedicated outbound queue. Additionally, a mechanism is in place to limit the data packet size. This control setting, if needed, would be assigned by the viewer.
When a hub server is backed up, the command to purge an order across the dotcom after correcting it to an image failed to propagate. As a result, the study could not be opened from the RIS because the search for the study returned multiple items (the study and the lingering orders).
Worklist filters for name fields, study size and multi-value fields have been updated to support features available in earlier versions, including the ability to search on individual name components.
The length of enumerated values assigned to a database field was not checked, resulting in unexpected values and results. The length is defined on the setup page and value lengths are now enforced before saving.
Uncaught exceptions coming from Internet Explorer were not handled properly, resulting in a web page exception.
The method used to open the help pages in a new window blocked pop-ups by default. The setting has been changed to allow the new tab to open without user acknowledgement.
In a dotcom where the master is the child server, a report edit could be processed on the child and propagated to the parent before the parent registered the original report. If a report notification event arrived at the parent before the report was registered, the notification was triggered before all the fields were updated.
Given the special handling of static user accounts, such as the system account, the user account export script failed to export any information. The script now ignores static accounts.
The Study Changed field did not recognize saved report objects. As a result, the study fingerprint was not updated and the change state remained untouched.
When creating or modifying a DICOM device entry using a duplicate AE Title and the user decides to ignore the warning and save it anyway, the software failed to apply the change because the override flag was ignored.
When a study update and study acquisition event occur within the same period, the study acquisition notification message could be suppressed due to the message reduction process. Now, study acquisition events are no longer collapsed with study update events.
The mechanism used to reconnect the persistent database connection was not implemented, resulting in database access errors.
Studies with compressed images greater than 1MB displayed corrupted (noisy) images because processing failed to buffer pages correctly.
The task manager could stop sending order notification messages to web service devices if the web services device is inaccessible and a message task was sent to retry but then deleted or suspended. When the web services device is again accessible, future messages would be collapsed behind the deleted retry task.
After correcting the importation of custom worklist layouts when upgrading from v7.2 to v8, the action buttons and lock indicator were dropped because v7.2 does not store them in the worklist configuration. By default, upgrades include the default v8 worklist action buttons.
If a saved worklist contains conditional coloring on a hidden column, an error occurs because the coloring tool cannot locate the column and the worklist appears as an empty list.
A user account’s password settings could be applied after making temporary changes to the account’s LDAP settings, even when the account was configured to use an LDAP authentication agent.
Pressing the More bar to display the next page of worklist entries could result in duplicate rows if the user has no default worklist defined and is in a group with an unsorted, unfiltered default worklist defined.
Processing a late-arriving object resulted in reprocessing existing objects’ initial quality blob (thumbnail) data because the herpa creator did not yet check for existing data.
The web viewer failed to launch on a Hyper+ farm system in which the stream server runs on a different server than the application/web server because the web viewer was passed only the web service ports and not the full server URL.
Processing large objects into blobs could result in corrupt data due to a missed lock.
The java component upgrade applied in Hyper+, replacing the old unix socket implementation, does not support the same socket options. When attempting to forward studies under certain conditions, an unsupported option caused an exception and the request failed.
Report view templates using a field with the VR of SI, such as the Interpretation Status ID field, would log errors and display the raw data because support for the VR type was removed.
Copying a study to the worklist folder failed because the data directory was not created, a result of moving the storestate.rec file from DICOM repository to the meta repository.
UPGRADE NOTICE: This change invalidates all blob (processed) data in the cache. A data value overflow condition existed in the blob header when the blob size exceeded 2GBs, causing blob creation (processing) to miss some images.
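The overflow class can be illustrated with Python's `struct` module: a signed 32-bit header field cannot represent sizes at or above 2 GB (2**31 bytes), while a 64-bit field can. The actual blob header layout is not documented here; this is a sketch of the failure mode only.

```python
import struct

def pack_blob_size(nbytes):
    """Pack a blob size as an unsigned 64-bit little-endian value.
    A signed 32-bit field ("<i") overflows at 2 GB, the class of
    overflow described in the entry above."""
    return struct.pack("<Q", nbytes)
```

Attempting `struct.pack("<i", 3 * 2**30)` raises `struct.error` because 3 GB exceeds the signed 32-bit range.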
The taskd client canceled the keepalive timer when terminating the connection, which prevented it from being restarted, causing reprocessing and reindexing requests to fail.
When a request to clear the cache was issued from the Tech View page very quickly after loading the web page, a maintenance procedure might fail to complete before the cache clearing started, resulting in an error and the cache data remaining in the repository.
When registering the PbR before the image objects, the value of the Date field could display the PbR’s creation date-time rather than the image object’s study date-time because the calculation of the Date field from the minimum SOP instance was not performed.
A low level lock timer created a condition that limited the number of times the system could attempt to release a reference counter, yet during certain real world scenarios, more attempts are needed. As a result, reference counts were not released, causing an inconsistent state in the data.
Some documented MySQL exceptions occurred but the recommended solution – retry the query/update – was not applied.
Multiple collapsed prepStudy tasks could exist in the task queue at the same time due to a race condition when creating these tasks.
prepStudy tasks in the retry queue could not be terminated by the post-collapse cleanup function.
After upgrading Chromium (used by Chrome, Edge and other browsers) to Version 106.0.5249.103 or later, some of the browser’s drag-and-drop features, such as applying a worklist filter from a column header or column value, corrupted the web page contents, resulting in a disorganized layout.
A change in the DCMTK toolkit required connection timeouts to be assigned earlier than they were. As a result, all but the first send request, and all receive requests, used the built-in timeout value.
Some operations could be copied from the browser’s network panel and invoked from another browser by a user with different permissions, allowing users to perform unpermitted operations. The missing permission check has been applied.
If the Institution Name value contains an apostrophe and the field is used in the filter criteria applied on the worklist, the open next/previous study command results in an exception due to an improperly formed query.
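The standard remedy for apostrophe-breaking queries is parameter binding rather than splicing the value into the SQL text; a sketch using `sqlite3` as a stand-in for the production database, with hypothetical table and column names:

```python
import sqlite3

def find_by_institution(conn, name):
    """Bind the value as a query parameter so an apostrophe in the
    name (e.g. "St. Mary's") cannot terminate the string literal
    and malform the query."""
    cur = conn.execute(
        "SELECT id FROM studies WHERE institution = ?", (name,))
    return [row[0] for row in cur]
```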
The system ignored global restrictions when selecting the next/previous study. If the user does not have permission to view the selected study, the result is an invalid study error.
A typo existed in the term “ForwardStudy” in log file entries for a web services study forward event.
Newly created document types failed to show up on the document type configuration page until after a browser page refresh.
The viewer version number in the uploaded viewer logs was incorrect because the data was taken from the wrong object.
When a registered viewer device issues a cache state request to the server and the server does not have an explicitly defined prior study cache state setting, the parsing algorithm misinterprets the parsing results and registers an unnecessary exception in the log file.
The initial dotcom setup process failed to include the server’s self ID setting in the default configuration file. As a result, the support account was not recognized.
A reorganization of the component start up scripts broke the setup of the default pb-scp configuration file, resulting in appending the wrong defaults to the end of the configured settings which were then taken as the configured value.
Actions failed to run because the path used to identify the curl script used the removed custom component. The path has been updated to use the OS-supplied curl tools.
The persistent database connections would not reconnect if the connection was lost or timed out, resulting in retried attempts to register objects, among other incomplete database requests.
When attempting to acquire and register large numbers of objects in a short period of time, herelod processes failed to terminate cleanly, waiting unnecessarily on the release of conditional variables, resulting in failed registration tasks and dropped objects.
After upgrading java, an incompatible JAX-WS file caused all web services commands to fail. Upgraded JAX-WS to version 2.3.5.
The location of java has moved, but the path variable used in multiple scripts, including the user account import and export tools, still pointed to the former location.
Persistent and non-persistent database connections would release the SQL library object when terminating, invalidating the persistent connection and causing unstable behavior in other threads.
The updated version of MySQL, using the carried-over settings, treats truncation as an error rather than truncating the value automatically. The settings have been updated to default to the previous truncation behavior.
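In recent MySQL versions the default `sql_mode` includes `STRICT_TRANS_TABLES`, which turns over-length inserts into errors. A sketch of the kind of option-file override involved; the file path and final mode list here are illustrative, not the product's actual settings:

```ini
# /etc/my.cnf -- illustrative only; the real option file and mode
# list depend on the installation.
[mysqld]
# Omitting STRICT_TRANS_TABLES / STRICT_ALL_TABLES restores the
# pre-strict behavior: over-length values are truncated with a
# warning rather than rejected with an error.
sql_mode = "NO_ENGINE_SUBSTITUTION"
```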
A study with no cache or processed data directories, e.g., a study with just a PbR object, encountered an exception when preparing the meta data because of a failure to examine the return code value.
If the user manager runs on an independent server, login attempts failed because the software did not pass data between discrete objects correctly.
Clearing cache from the Technologist view page deleted the data but failed to update the internal processing state value because the feature to track processing status across multiple servers was not yet implemented.
Reindexing a study failed when the ingestion and application components run on separate servers because the application server has no registration abilities. The registration request is now submitted to one of the registration servers.
The stream server the viewer uses to download the data is defined in the herpa data but the early v9 viewer does not use the value yet. Until this is available, the server will leave the setting empty if it determines the stream service and the web service are running on the same server, forcing the viewer to fall back on its assumption they are the same. Note this solution only works when running stream and web services on the same server. When the services are separated, an updated viewer is required.
Static user passwords failed to account for the updated hash format.
The password hash format update missed a few places, including the Change Password page.
Improper handling of a return code resulted in clearing the action history file when a database query encountered an anomaly or simply failed to complete.
Parsing the date-time values in patient folder notes assumed a 12-hour clock rather than a 24-hour clock.
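In strftime/strptime terms, this is the difference between the `%I` directive (01-12, normally paired with `%p`) and `%H` (00-23). A quick illustration with Python's standard library:

```python
from datetime import datetime

note_time = "2023-01-15 14:30"

# %H reads a 24-hour clock, so an afternoon hour parses correctly
parsed = datetime.strptime(note_time, "%Y-%m-%d %H:%M")
assert parsed.hour == 14

# %I expects a 12-hour value (01-12), so hour "14" is rejected outright
try:
    datetime.strptime(note_time, "%Y-%m-%d %I:%M")
except ValueError:
    pass  # the 12-hour directive cannot parse a 24-hour timestamp
```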
Correcting a study to an order might fail because orders don’t have an owner hub, which is required to determine where the combined study resides. As a result, manual corrections from the GUI reported a failure to the user.
A missing RMI call caused the server correcting a late-arriving order to a study to skip updating the study data with the order data when the correcting server was not the study owner.
If the shutdown process encountered an exception, which could be legitimate depending on the timing/sequencing, it could exit before terminating hermes.
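The general defensive pattern is to guarantee the child-termination step runs even when an earlier shutdown step throws, e.g. with try/finally. A sketch with hypothetical step names (not the product's actual shutdown code):

```python
def shutdown(steps, terminate_hermes):
    """Run shutdown steps in order; always terminate hermes at the end."""
    errors = []
    try:
        for step in steps:
            try:
                step()
            except Exception as exc:   # a step may legitimately fail mid-shutdown
                errors.append(exc)     # record it, but keep shutting down
    finally:
        terminate_hermes()             # runs no matter what happened above
    return errors
```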
Back-end support for forwarding studies from the patient folder was incomplete, resulting in no action when the button was clicked.
The query time reported in a slow query log entry was in seconds but was labeled as milliseconds.
User accounts with empty passwords, which is not a valid state, could not be corrected because the missing password was not handled and caused an error when editing the account.
Duplicate tasks that weren’t collapsed were not returned to the duplicate task map, causing them to remain in the retry queue until executed. Under certain conditions, the retry queue could grow large with unnecessary duplicate tasks.
While looking for plugin license files, the system failed to recognize plugins that were not distributed as DLL files.
Identifying the email address offered as the default when configuring a Notify action failed for internally-defined accounts, such as the system account. In this case, the default comes up empty, requiring the user to explicitly declare the email recipient.
If the tasks page’s filter panel was open when the user called up a different web page, the filter panel remained on the screen instead of closing.
Viewer sessions were not recognized after restarting Apache, causing the cached data fields (e.g., Percent Loaded) to report no data until the user logged out and back in.
Searching a worklist using a date value in the quick filter field that would result in a query qualifier exception returned an error message and no data because the date filter was improperly encoded in the database search request.
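Splicing a user-supplied filter value directly into the query text is what typically produces this kind of malformed request; binding the value as a parameter lets the driver handle the encoding. An illustrative sketch using sqlite3 as a stand-in database (the product's actual schema and driver are not shown in these notes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE worklist (accession TEXT, study_date TEXT)")
conn.execute("INSERT INTO worklist VALUES ('A1', '2023-06-01')")
conn.execute("INSERT INTO worklist VALUES ('A2', '2023-06-02')")

# Bind the quick-filter date as a parameter instead of concatenating it
# into the SQL string, so the value is always encoded correctly
rows = conn.execute(
    "SELECT accession FROM worklist WHERE study_date = ?",
    ("2023-06-02",),
).fetchall()
```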
The compress data action used a static pathname to the study rather than using the repository’s location finder. If the study was moved from its original location, the compress action could not locate it and therefore failed to process the data.
A system lock failed to be released because of a missing constructor. The constructor has been added to avoid the stuck lock. Also, when a user attempted to break the system lock, which is not permitted, the user received no explanation of why the lock remained. Now the user is informed they do not have permission to break a system lock.
When using a custom port to launch the web viewer, the server would mishandle parsing the URL to locate the host name, causing the request to fail.
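Splitting the URL or Host value on ":" by hand is a classic way to mishandle custom ports; standard URL parsers already separate the host name from the port. Illustrated with Python's `urllib.parse` (the server's own parsing code is not shown in these notes):

```python
from urllib.parse import urlsplit

url = "http://pacs.example.com:8443/viewer/launch?study=1"
parts = urlsplit(url)

# .hostname excludes the port, and .port is parsed separately, so a
# custom port can no longer contaminate the extracted host name
assert parts.hostname == "pacs.example.com"
assert parts.port == 8443
```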
The plugin licensing enhancements failed to recognize custom plugin modules because the new naming rules were not applied correctly to custom plugin file names.
When using the web services command to create a user and including the password option tag but specifying no options, the request would fail because the server could not parse the empty string from the request.
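The defensive fix is to treat a present-but-empty option value the same as an absent one before parsing it. A minimal sketch with a hypothetical option format (the web services command's real syntax is not shown in these notes):

```python
def parse_password_options(raw):
    """Split a comma-separated option string, tolerating None or ''."""
    if not raw:                 # tag present with no value, or tag absent
        return []               # behave as if no options were requested
    return [opt.strip() for opt in raw.split(",") if opt.strip()]
```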
The delete button was not available from the patient folder if the study contained more than one report object (i.e., there was an addendum to the main report).