The PACS Server
Splits (and merges) are prohibited on the replica server, but the tools were still enabled in the user interface.
The monitor script checked for data from the common database rather than the local database, which took longer and didn’t contain all the data.
When a callback hook was triggered with notifications from both Dcstudy and Dcreport, an exception occurred because the query generator added a table to the list twice.
The user manager was not set up to run on a separate VM in a farm configuration, so when it was deployed that way, the system failed to collect a list of user backup files.
Person names presented on the edit page could ignore the configured name format if they did not contain four explicit name components as defined by DICOM.
The repoAge tool returned the same result whether checkoverload deleted data or not because the log check failed to find the relevant event.
A high priority task could run again after restarting taskd if the task’s priority and name were changed in the DB but not in the object.
If the repository location table contained a reference to a non-existent mount, taskd and apache could crash because the mount was not verified before attempting to access it.
Added controls preventing access to HTML source data in the viewer’s report panel to eliminate cross site scripting vulnerabilities.
When collecting data for a worklist that no longer exists, the system returned the same result for an empty list as it did for an error, leading to inaccurate results. This change includes adding a full stack trace to the info.log file when an action fails to run.
Some java calls that do not go through the wrapper do not call back to the wrapper to use the database, leading to poor performance when accessing the cache and processed repositories.
The query qualifier returned false warnings when opening a folder and when evaluating some compound list filters.
Restarting taskd from the GUI might not wait for taskd to terminate. As a result, taskd could end up in a state where it was running but its identifier was not recorded, making it appear it was not running.
File types are checked when uploading attachments from an admin account to eliminate injection type vulnerabilities.
A software generation script was overwritten when performing a maintenance operation, causing the web service component to fail to start.
When a changed action list affects a very large number of studies, a database query might exceed the maximum command length, resulting in a query exception.
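For illustration, the standard way to stay under a command-length limit is to split the key list into bounded batches. A minimal Java sketch; the table name, column names, and batch size are assumptions, not the product’s actual values:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.Collections;
import java.util.List;

public class BatchedUpdate {
    private static final int BATCH_SIZE = 500; // illustrative bound; keeps each statement short

    // Applies an update to the studies in bounded batches so no single
    // statement can exceed the server's maximum command length.
    static void touchStudies(Connection conn, List<Long> studyKeys) throws Exception {
        for (int i = 0; i < studyKeys.size(); i += BATCH_SIZE) {
            List<Long> chunk = studyKeys.subList(i, Math.min(i + BATCH_SIZE, studyKeys.size()));
            String placeholders = String.join(",", Collections.nCopies(chunk.size(), "?"));
            String sql = "UPDATE study SET action_pending = 1 WHERE study_key IN (" + placeholders + ")";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (int j = 0; j < chunk.size(); j++) {
                    ps.setLong(j + 1, chunk.get(j));
                }
                ps.executeUpdate();
            }
        }
    }
}
```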
A vulnerability in the password reset function has been closed.
To use memory efficiently, compressing multi-frame objects for export allocates memory one frame at a time rather than all at once for the entire object.
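In outline, the frame-at-a-time approach keeps peak memory proportional to a single frame rather than the whole object. A hedged sketch; FrameSource, Compressor, and Sink are hypothetical stand-ins for the real export pipeline:

```java
// Sketch: compress a multi-frame object one frame at a time so only one
// frame's buffer is allocated at any moment.
interface FrameSource { int frameCount(); byte[] readFrame(int index); }
interface Compressor { byte[] compress(byte[] raw); }
interface Sink { void write(byte[] data); }

class FramewiseExporter {
    static void export(FrameSource src, Compressor codec, Sink out) {
        for (int i = 0; i < src.frameCount(); i++) {
            byte[] raw = src.readFrame(i);   // allocate a single frame
            out.write(codec.compress(raw));  // compress and emit it
            // raw becomes garbage-collectible here, before the next frame loads
        }
    }
}
```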
The web viewer failed to load an image with a presentation state containing saved annotations due to a parsing error in the study XML file.
Missing role classes caused the stream server to crash at startup.
A change in the URL decoder tool changed the way empty notes are handled, resulting in an exception when attempting to display an empty note in the worklist patient folder.
Reheating cold studies containing no images failed to update the prepared status because of an erroneous check for the completed condition.
When a study didn’t exist on a mounted repository and a delete request could not be completed while running in Authoritative mode, the system would attempt to find and delete the study on other repository mounts, a search that Authoritative mode makes unnecessary.
The restart buttons for the taskd and DICOM components were disconnected. They’ve been activated to restart the component when its status is either STOPPED or ERROR.
A missing permission check could be exploited to gain control of the layout edit tool.
When an order message arrived via web services for a study that had already been acquired, the study identifier details were insufficient to identify the study, and the order contained a proposed Study UID matching an existing Study UID, the wrong study could be updated.
The user export and import tools failed to recognize a web service account type and imported the user as a regular user account.
Report addenda submitted from a third-party device using the web services interface could be tagged as a primary report in the response message.
Addendum text submitted from a web services client was stored in the wrong container, hiding the text when displaying the addendum.
The recent enhancement reducing the time needed to stop tasks failed to apply to some slow processes. These include checking or starting tasks from certain queues. These now recognize stop requests and terminate quickly.
The taskd and DICOM restart tools are inactive; the buttons on the server information page are grayed out to convey that the feature is unavailable.
If a saved list is assigned to an action and the account of the list owner is deleted from the system, the action would still attempt to run but it would report errors (and not complete). An additional notification is displayed when deleting a user account whose saved lists are linked to enabled actions.
Some RGB, YBR, MONOCHROME1 and MONOCHROME2 images failed to compress using JPEG 2000 because the DCMTK routines referenced some unnecessary yet uninitialized fields, causing the process to crash.
When the repository root is the only mount point, the repository handler always returned the first hash directory rather than an empty string.
An unnecessary lock request could occur when the repository handler issued calls to get a file location or create a directory with a specific combination of parameters.
If a user group contained a permission the server did not recognize, importing the group failed.
When displaying images in the web viewer’s series view, left-click-drag attempted to drag the series as though it were loading from the thumbnail panel, even when the thumbnail panel was hidden.
A retired function could be hijacked to upload dangerous content.
A fix to handle database retries resulted in the task page reporting zero failed and suspended tasks after starting taskd.
Excessive, unnecessary license checks occurred when loading the system information page.
Some exposed parameter values could be used to collect sensitive data.
Users with certain permissions could get access to tools to merge studies.
Media creation might fail when partial studies are selected, the PbR is explicitly selected, and reports are included, because duplicate items can appear, causing an exception.
A search used by the monitoring tools would stop when it encountered binary data, such as a special or accented character, truncating the results displayed on the monitor page.
An additional callback hook message format, XML2, exists to allow XML files containing COLID values that start with a numeric character.
When a study was opened while the stream server was still fetching blobs from the cache, the opening was delayed until the previous download completed.
During high loads on a system with many mount points, the java VM can crash due to a bug in the standard libraries. The affected library function has been replaced with a safe one.
To assure multifactor authentication confirmation codes get sent to a single user, the system prevents users from entering duplicate phone numbers into an account profile. Attempts to edit existing accounts sharing duplicate phone numbers will also fail.
When multiple threads were fetching blobs, contention could spike CPU utilization and slow down the overall download.
Loading the presentation state details in the web viewer conflicted with processing saved annotations, resulting in missing Hounsfield annotation values.
It was possible for users to create and save worklists without the necessary permissions. A permission check was added.
A race condition existed causing simultaneous login requests to bypass the failed login processing and mistakenly grant access to the system.
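A conventional way to close this kind of race is to serialize login attempts per user so the failed-login state is always read and updated atomically. A minimal sketch; AuthService and its methods are hypothetical names, not the product’s actual login path:

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch: one login attempt per user at a time, so concurrent requests
// cannot race past the failed-login processing.
class LoginGate {
    private final ConcurrentHashMap<String, Object> locks = new ConcurrentHashMap<>();

    boolean login(String user, String password, AuthService auth) {
        Object lock = locks.computeIfAbsent(user, u -> new Object());
        synchronized (lock) {
            if (auth.isLockedOut(user)) {
                return false;             // lockout is now observed consistently
            }
            boolean ok = auth.checkCredentials(user, password);
            if (!ok) {
                auth.recordFailure(user); // counter updated before the lock is released
            }
            return ok;
        }
    }
}

interface AuthService {
    boolean isLockedOut(String user);
    boolean checkCredentials(String user, String password);
    void recordFailure(String user);
}
```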
Collecting message targets returned all users even when the selected target user was the Admin group.
The checksum used to determine whether a user’s profile settings changed included some server settings, causing unnecessary updates.
When using the Edge browser, the autofill function incorrectly offered to insert the entire name into a name component.
After upgrading to MySQL v8.0, the default MySQL character set changed from Latin1 to UTF8, and the change affected client applications as well. This caused problems when upgrading existing PACS systems whose databases were configured to use Latin1 character sets.
The mechanism used to prevent creating a report or dictation in an order was dropped when the patient folder was overhauled. The tools are enabled only when the study is in the Completed or a later state.
The user name displayed in the lock notification panel was garbled if it contained an apostrophe because the string was improperly decoded.
If late correction processing encountered an error, such as failure to lock the study, it terminated the correction for the remaining studies. Now the error is logged and processing continues with the next order.
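The log-and-continue pattern looks roughly like the following sketch; Correction and its methods are placeholders, not the actual task classes:

```java
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch: per-order error isolation for late-correction processing.
class LateCorrectionRunner {
    private static final Logger LOG = Logger.getLogger("corrections");

    interface Correction { String orderId(); void apply() throws Exception; }

    static void runAll(List<Correction> pending) {
        for (Correction c : pending) {
            try {
                c.apply();   // e.g., lock the study and apply the correction
            } catch (Exception e) {
                // Previously an exception here aborted the remaining orders;
                // now it is logged and the loop moves to the next one.
                LOG.log(Level.WARNING, "correction failed for order " + c.orderId(), e);
            }
        }
    }
}
```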
Acquiring the same object on two different registration servers at the same time could result in mismatched blob and acquisition date-time data. More analysis is applied to optimize the acquisition, registration and processing sequences.
The repository handler callback mechanism can initiate Java commands. When the Java VM is unavailable, these callback routines won’t be created. The interface has been updated to ensure these commands are executed.
Opening a frozen or cold study could result in duplicate reheating tasks because the status update could be performed after the reheat process initiated.
If multiple errors occurred when servicing an SDK request, the stream server might not have returned any error because the software exited the loop prematurely.
When stopping checkOverload tasks with a kill command, including when using rc to stop the system, the task will run to completion and terminate gracefully on all farm servers, including the Application server.
The farm validator failed to identify an external database server because the macros used resolved to localhost.
The external database server was dropped from the configuration once the registration servers were started because the configuration file was not updated after changes were made.
The control status for MySQL would return an error state when the cloud native SQL database was configured into the farm because the grants might include unsupported wildcard characters.
The farm validator was updated to check IP addresses match server names but when private and public IPs are used, nslookup could return an external IP and hostname could return an internal IP, resulting in a false error.
Some users who are not intended to have locking privileges were able to lock studies when opening them because a permission was unintentionally added to the list of permissions that allowed study locking.
The server information page failed to load in a farm configured with an external database.
Going to the Server Information/Monitor page directly from the dashboard failed to initialize the dashlet’s date fields.
When running java commands without the -f parameter and taskd is in the process of stopping, the return code was OK even though the command didn’t execute.
Attempts to delete a study from the GUI could fail and never retry if the system could not establish a lock.
The pb-scp process was issuing bad (unbounded) queries when determining the study’s priority.
The repository handler failed to pick up on-the-fly changes because two recent performance enhancements limited the frequency of checking for configuration changes.
System-level forward actions failed to run because system users were managed differently on registration servers. The login process failed to recognize the built-in system account and as a result, a session could not be started to process the action.
Before establishing a new communication channel, herelod checks to see if it exists to avoid using an allocated channel.
When the thumbnail image contained the entire image, the stream server treated it as a request for no data and failed to respond, resulting in a deadlock condition.
The startup processing failed to initiate the local cache repository because the check for it occurred in the wrong sequence, and it was part of a run-once operation, meaning it wouldn’t try again after the first attempt.
The viewer’s patient folder would display PDF files on single-server systems but not on farm deployments because the PDF file gets generated on the registration servers and there was no RPC call from the application server to generate it.
The control status for MySQL running as a remote database returned an error status when multiple instances were updating the same temporary table and one was inserting a row while the other was dropping the table. When this happens, Dr Watson would attempt to restart the local database, causing dropped connections and performance delays.
This issue affects servers running platform Rocky 8.10 only. Some log files could disappear after log rotation because a required signal to rsyslog wasn’t issued and the new log files were not created.
Three problems were fixed. First, if a PbR file was sent to an existing study on a service in store mode (i.e., processing is disabled), the study transitioned to cold, causing it to be processed. Second, on a backed-up server, an activity manager’s timer could expire with some dcreg tasks still in the queue, causing the server to mark the study as partial and the study to be reprocessed. Third, a quick fix to execute prep study tasks before the prep mode was set caused a full reheat.
The identification of priority studies failed when processing a newly arrived object. As a result, medium and low priority studies could be elevated to high priority status.
Prior studies in an error state (after processing or registering) with no PbR object could cause the patient folder to crash when it attempted to collect the notes. When the error condition exists, a red icon appears in the study’s entry in the patient folder study list to indicate Notes are unavailable until the error is cleared.
Failure to use the updated interface when searching for studies to purge resulted in repeated checks and space calculation errors.
Removed false positives when monitoring latency data. One resulted because the log entry contained a wrong module name or a module name containing spaces. Another occurred when the object failed to process on the first attempt and then succeeded to process on a different registration server.
When displaying thumbnail images on the Tech View page, an indexing mismatch between the back end and the repo handler caused the system to check the wrong repository, resulting in empty thumbnails.
Failure to flush taskd updates prior to stopping taskd, even gracefully, could cause bad states when restarting tasks. In some cases, this could result in a hung task.
The new quick delete mechanism resulted in auto (nightly) purge failures, specifically when configured to purge objects and keep reports and the studies contain no report.
Stopping DICOM components could terminate pb-scp processes while the study is locked, resulting in a stuck lock. Now the process waits for the repository handler to finish and release the lock before exiting.
Users without admin permissions could see saved lists from all users because the access filter was not applied after making an unrelated change.
The stream server failed to check for null pointer values during processing, resulting in a crash and, in some cases, an abandoned study lock.
When a client application issues a batch open command and the primary study in the pbs file does not exist on the server, the viewer fails the open request and returns an error to the client.
When the temp meta data repository is configured as a symlink and a user operation, such as a merge or delete, obsoletes a study, the source studies did not appear on the study cleanup page because the system failed to follow the symlink.
On a dedicated stream server, the nightly log rotation task would be terminated prematurely because the PID used to receive a HUP signal was applied to the parent process’ process group instead of the specific process.
When a blob got into a corrupt state, the study was left in cold status rather than indicating the error, making it difficult to track down the cause of the corruption and causing the system to reprocess it repeatedly (with no better results).
When a collapsible task is in the retry, delay or future queue when the user deletes it using the study cleanup tool (from the GUI), all other tasks associated with the study get collapsed with it, preventing them from running.
When the stream server encounters a corrupt blob, it might crash because it failed to handle invalid buffer boundaries. Additional parameter checking was added so the stream server fails gracefully.
Stream server commands containing no study key were not handled correctly, putting the stream server in an invalid state that failed to respond to further commands.
The web viewer’s thumbnail panel should have been hidden, but its visible state was set to show.
Opening a study with at least one presentation state could fail when running in replica mode because the system attempted to create a new repository folder rather than use an existing one.
Using the GUI’s study cleanup tool might fail to remove running task files from the database.
When an image is small and can be streamed in a single packet, the stream server might think there is no data to send and doesn’t send the data.
The server failed to provide the necessary key parameter to the stream server, resulting in no image download to the web viewer.
Enhancements to improve blob creation resulted in file checks on NFS storage, which proved to be suboptimal. The file system check has changed to remove the inefficiency.
A PrepStudy message might not be issued when processing fails, causing partially cooked studies to remain in a finalizing state.
In a wormhole configuration, auto forwards were ignored because the data is acquired through the storestate.rec file and not a device-initiated acquisition. Now, the replica server replaces the auto forward devices in the storestate.rec file with devices configured in the Replica system.
A variable overrun occurred when the blob size exceeded 2 GB.
The cleanup process was not designed to run multiple instances and when the process took longer than a day to complete, or a user manually initiated multiple instances, unexpected results can happen. The process is now locked. Attempts to run multiple instances are detected, terminated and logged.
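A typical way to detect and refuse a second instance is an OS-level file lock; a sketch in that spirit, where the lock-file path is purely illustrative:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: refuse to start a second cleanup instance.
class CleanupGuard {
    static FileLock acquireOrNull(Path lockFile) throws IOException {
        FileChannel ch = FileChannel.open(lockFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        FileLock lock = ch.tryLock();   // null if another instance holds the lock
        if (lock == null) {
            ch.close();
        }
        return lock;
    }

    public static void main(String[] args) throws IOException {
        FileLock lock = acquireOrNull(Path.of("/tmp/cleanup.lock"));
        if (lock == null) {
            System.err.println("cleanup already running; exiting");
            return;  // the second instance terminates and logs, as the note describes
        }
        try {
            // ... run the cleanup pass ...
        } finally {
            lock.release();
        }
    }
}
```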
The nightly cleanup process periodically checks to see if taskd has stopped and if so, terminates gracefully.
After adding support for AND filters, the query qualifier was improperly applied to name filters, resulting in invalid warnings about loose search criteria.
When attempting to stop taskd, the postThread could go idle and not exit, hanging the stop request.
The prep study tasks were running the finalization mode processing on each server that processed an object in the study. These have been consolidated and the finalization processing is performed by the server that had the last activity.
When the system performs a daily purge and there are forward requests to downed devices in the queue, the query failed to identify which objects need to be purged and as a result cancelled the cleanup tasks.
When checking to see if an acquired object matches an existing object and the existing object does not contain the private attribute tags, the process can crash and the object will be reprocessed whether it needed to be or not.
The repository handler class initialized the dirty handling module in the constructor which caused unnecessary locking and delays.
When a registration event was recorded on a study already in the finalizing state, the system erroneously initiated a full reprocess even if processing mode was disabled.
A misplaced state assignment could result in parallel herpa threads, causing some objects to be dropped from the blob.
DICOM date-time field types are incompatible with SQL-formatted date fields and require conversion when used to apply relevancy rules.
When an admin changes the relative priority of a task resulting in a change affecting a delay task, the delay task might not get picked up until taskd was restarted.
Exception handling in storescpreg cleanup tasks was performed by the wrong exception handler, removing activity tasks from the activity manager.
In a server farm configuration, cold thumbnail images were not displayed because the processed repository was not accessible to the stream server, which delivers the thumbnails to the web page. When needed, the stream server now instructs the app server to move thumbnail image files to the cache repository, which the stream server can reach.
The quick delete mechanism didn’t handle NFS mounts well due to unnecessary checks for objects and their references. This resulted in performance issues, especially when purging large studies.
When many subdirectories and subdirectory structures exist, the optimized directory cleanup tool can experience resource starvation leading to deadlock. Cooperative multitasking has been employed to manage delete task threads.
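The note doesn’t detail the scheduling scheme; one common way to keep delete workers from starving each other is simply to bound their concurrency so queued tasks wait their turn. A rough Java approximation, with an illustrative pool size:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: a bounded pool of delete workers; excess tasks queue instead of
// all contending at once.
class DirectoryCleanup {
    static void deleteAll(List<Runnable> deleteTasks) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4); // illustrative bound
        for (Runnable task : deleteTasks) {
            pool.submit(task);
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}
```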
The recent change to copy a group of overlay fields in the profile configuration file dropped all but one of the multiple fields.
When a stream server has been running for a long time and has many open file handles, runjavacmd could crash due to a buffer overflow if the file descriptor is larger than 1023.
A task can enter a race condition when modifying its relative priority, particularly when the queue contains tasks scheduled to run in the future, causing prepstudy tasks to collapse into a task that will never execute.
The web viewer failed to load images if the study was in a partially cooked state because it was only checking the cooked status, which was false, and ignoring the partial status.
In a farm system in which existing objects get resent, the final PrepStudy task might not run, leaving the study without herpa data, which would then be generated on-the-fly when needed.
Herelod failed to recognize I/O errors properly, causing it to crash when one occurs.
If transaction handling is enabled and a web services message is missing all the context fields, the check for the message ID would generate an error, causing the transaction processing to unnecessarily fail.
StorescpReg tasks created by study acquisition may end up in different taskd jobs
Starting registration tasks during study acquisition could end up with related tasks assigned to separate jobs, subjecting them to be processed independently, causing delays and complications.
IP validation on devices page only done at server side
The data validation routine on the web page failed to check the hostname field for non-alphanumeric characters when configuring a device’s hostname, resulting in the creation of an invalid device entry and only a log entry in the error log.
Reports received via WS do not always save to orders
Order field values using an internal VR do not get processed correctly when converted to zero-image studies, causing the order to be dropped but returning a success status.
Action runs twice for the same study
When actions are running slowly and multiple action requests overlap, completed actions could be logged before the other instance queues up the action for a given study. As a result, the action could be performed on the same study multiple times.
Edit action retries when the coercion is empty
If the coercion rules for an edit action were empty, the action returned a failure status and the study was not marked as processed. As a result, the study was retried in the next cycle.
Inconsistent functionality across servers on tasks page
When taskd stopped on the registration server, the tasks page indicated taskd was functioning correctly.
On the Device Edit page, the Task Limits "Dicom Out" and "HTTP Out" should appear
The task limit settings for DICOM Out and HTTP Out were missing from the device’s edit page. They have been restored.
Exception in CallbackTask and on Logs page
If the logs page included fields from both the report and study table, the join operation failed because there was no common join field.
File upload servlet should verify user session
The file upload servlet validates session credentials before uploading the document.
No monitor data on remote server of new farm
When adding a server to a farm, a directory used to share monitoring data wasn’t created and the data was unavailable.
Worklist export csv file unique ID part can be empty
Exported worklists create a UID for use in the file name. That UID depended on an ID created by the browser, which could be empty. In such a case, a vulnerability existed that could result in the file being downloaded without proper session credentials. The dependency on the ID has been eliminated.
Remaining reference to window after closing diagnostic web viewer
Failure to release the window after closing a browser running the web viewer (in iOS, specifically), prevented the user from launching multiple web viewer browser windows.
Authoritative mode repository still needs to call low level getLocation for dirty resources
When a repository handler is in authoritative mode, it would not import dirty resources into the database via the wormhole.
Exception page contains too much information
A potential vulnerability has been resolved in which details about the internal operations of the software could be exposed.
Zero size blob entries not handled correctly
UPGRADE NOTICE: Corrupted blobs may exist prior to version 9.0.12. These studies must be recooked.
Zero sized entries in the full quality blob corrupted the blob, meaning some non-zero entries were dropped from the blob. Affected studies might appear blurry or report missing images.
Thumbnails not loading in split study view
The split study view page was not connecting to the stream server and the images failed to appear on the page.
Disk full does not stop acquisitions and allows some partitions to fill up
The disk full flag was not shared so when a repository was filled by one server, the others were unaware and the acquisition processing failed to stop.
Studies going into partial when migrating with cache/process disabled
With the migration setting disabled, meaning acquired studies are not cooked, and images for a study arrive in batches such that the first group are processed before the second group starts arriving, the second group of images ignores the migration setting and reheats the first group. Additionally, objects without dates were treated as new objects causing unnecessary reheats. As a result, the study is in a partial state and requires manual reheating to correct.
Processes can crash when multiple threads try to initialize a repository simultaneously
The repository handler could crash when one thread was initializing a repository object while another thread was deleting the repository as it prepared a new one.
Stream server secure port weak cipher
Weak ciphers have been disabled in the stream server.
Key images missing or not rendered correctly
Extracting images from non-redundant blobs, as in the case of key images, failed to detect the end of the blob, resulting in invalid images.
Initializing the repository handler still can crash
Failure to lock the cache map when adding repositories could cause the repository handler to crash.
Authoritative mode needs to avoid full repo scans even for repo status checks
Acquisition and worklist performance were negatively impacted by an unnecessary repository scan, particularly on systems with numerous repositories.
Frequent wormhole messages for the DB Repository wrapper may cause bad data
A race condition present when in wormhole mode and the origin server forwards many studies might lead to an uninitialized repository location record, causing taskd to crash.
testLocal.jsp is vulnerable
An internal tool to test IP addresses encodes the data to eliminate cross site scripting vulnerabilities.
Input fields vulnerable to HTML injection
Some input fields vulnerable to HTML injection are sanitized and checked before acceptance.
Injection: reflected cross-site scripting
An action validation tool encodes the input data to eliminate cross site scripting vulnerabilities.
Broken Access Control: user can add other users registered email address
Email addresses are checked before saving to eliminate broken access control vulnerabilities.
Streamserver stops sending
A race condition when calculating the number of packets sent and acknowledged left the counter out of sync and caused the transfer to stop.
Checking the server load (niceme) is broken, always returns cached values
Checking the server’s metrics returned a cached value. As a result, the state that existed when the check was first run got returned. This might cause the wormhole mode to never start sending studies or stop when the Replica server got busy.
Monitoring triggers heap dump capture
When collecting monitoring data, the disk mount checker triggered full heap dumps under specific partition configurations.
WH delete request on a study whose meta has not ever been initialized runs into errors
When a study was imported to the Replica server but purged by the Origin server before the Replica accesses it, the Replica server triggered a full scan and attempted to convert the meta data.
Do not set dirty when Order (Study) is moved on another server
The Replica server needs to ignore wormhole management messages sent from Origin for unmanaged resources (studies, orders) or for resources that don’t reside on the reported repository.
Study reheats occasionally go to retry
The timeout for reheating a study was too short when the farm consists of many registration servers, causing reheats to go to retry, delaying the availability of the data (blob).
Failed to identify primary row error should not terminate the action
When an order gets deleted, including when it is converted into a study, an action that depended on the order, such as identifying priors for processing, would fail because the order is gone. This caused the entire action to terminate. As a result, unprocessed studies fell to the next action event.
Stream server jit cooked studies stuck in preparing state
The stream server failed to recognize a transitional processing state resulting in an error that halted the data transfer/update to the viewer. As a result, the study status in the viewer remained in the processing state even after the processing was done.
EPObjCache entry and reference remains for a deleted order at the Replica
After deleting an order from an Origin server, the Replica server leaves a trace of it in the (object cache) database. The system later tried to remove the non-existent object, resulting in an error log entry.
Order correction broken
When collecting data for an order, the server queried every field which generated numerous errors and the operation might eventually fail.
Web viewer is not displaying presentation state (GSP)
The web viewer was looking for presentation state objects in the processed data but they had moved to the blob data. As a result, the web viewer failed to apply presentation state objects to images.
Checkoverload deletes too many studies when isClearableMove="false"
When the cache repository contains multiple mounts and one of those repositories exceeds the move limit, checkoverload unnecessarily cleared data from the other repositories and cleared more data from the affected repository than needed to satisfy the clean limit.
Reheat omits non-image objects from herpa
Reheating failed to include non-image objects meaning presentation states, CAD results, spatial registrations, and other objects were missing from the herpa data and not sent to the viewer.
Multi-modality studies incorrectly encoded
The modality value stored in the database included an encoded separator that was incorrectly treated as a NULL value. As a result, searches on an empty modality value matched these studies.
Full scans when deleting studies/orders via wormhole
When the Origin server deletes an order, the Replica server receives the same command. The commands produce codes that do not map the objmeta data to the database. Some of these codes initiate an unnecessary scan of the DICOM repositories.
User Viewer Profile Toolbar copy not working
Toolbars and some other XML branches in the user profile file weren’t managed like other data resulting in an invalid XML structure and failure to recognize the profile settings. The default viewer configuration file has also been updated.
Web viewer only shows first PS saved/uploaded on an image
Multiple presentation states were tagged with the same name value. As a result, the web viewer considered them the same view and only rendered the first instance.
Broken Access Control: Low-privileged user can list all usernames in the system
An unused function could be used to list all user accounts because access was not restricted.
Broken Access Control: Low-privilege user can use server settings page functionalities
Updates submitted using the acknowledgement notice and banner update pages could be used to access system settings because permissions were not checked.
Web viewer - Scrolling error in info.log
Using Firefox and the mouse wheel to scroll through a series in the web viewer logged an exception because of an unsupported environment variable.
False herpa/blob fetch error in stream server
A race condition when checking the blob/herpa object’s status code and validity could result in a download failure.
Web server settings cannot be saved when web viewer selection is not available
Web viewer settings could not be modified if the web viewer type setting was not available as an option.
prepstudy tasks always finalizing on all servers
Prepstudy’s final stage should be performed only after the last prepstudy task was completed and only by the server performing the task.
Web viewer - Hounsfield annotations imported from PS state not displaying properly
The web viewer would apply the stored annotations before the image decompression completed, causing an error when calculating the area values.
Encapsulated pdf objects not handled correctly in reheat
Reheating a study containing at least one encapsulated PDF object failed, leaving the study in a cold state, because these objects were not included in the enumerated list of reheatable objects.
Apache/Tomcat crash due to repohandler initialization
While the repository map is locked, the server attempted to perform a write operation that inserted an empty element into the map, causing Tomcat to crash when searching for a repository.
Media creation excludes image objects from the DICOMDIR when using JP2K compression
The DCMTK toolkit created a separate command for adding JPEG 2K files to the DICOMDIR. The files were copied onto the media but their reference was missing from DICOMDIR.
Monitor Time of WL page download value is incorrect
The Time of WL page download value displayed on the monitor page did not display the expected value because the system no longer loaded the page used to calculate the load time. It now loads the system default worklist page.
There can be lots of sleeping task-threads in taskd
When the system needed to start a task, it didn’t always check to see if the task was already running but in a sleep state, causing the system to instantiate unnecessary task threads.
Studies can be locked in the repository handler for 30 sec - 10 minutes
The study resource could not be locked when many tasks are operating on a study, resulting in a ten-minute timeout and causing unnecessary delays when attempting to process a study.
Some studies become cold or partial during processing
The processing mode entry in the study activity table is cleared by one server just before prep study on another server reads it, resulting in a return code indicating no processing is needed and the object remains uncooked.
Java debug not always shortcut when turned off
Some debug processing in the Helper class didn’t check the debug flag and ran when debug was disabled.
Page downloading monitor metric should only be collected on web role servers
Collecting page downloading metrics was slow because the system was collecting data from all farm servers, including those with no web server component running.
Odd size empty image causes herelo to crash
When compressing an image, if the requested image size is smaller than the minimum compression size, which can happen when the image matrix is very small, herelod could crash.
2x2 images don't decompress to lossless original in gwav4
Very small images result in overlapping filter vectors and fail to compress. As a result, images smaller than 4x4 are not compressed and the system uses the uncompressed data instead.
MultiframeSingleBitSecondaryCaptureImageStorage objects fail to process
Images defined as one-bit packed pixels failed to process because the raw image object was not created before the first pixel sample was added.
WS getFrame doesn't generate ima for key images
When the key image is requested from a web services call, the server didn’t call the routine that generated the annotations. As a result, the key image would not display the annotations.
Ignore Wormhole move requests for "not in DB" studies/orders
The repo manager ignores wormhole requests to move a resource if the resource is not managed.
System startup takes long with many (tens of thousands) users
When the system started, it scanned all user accounts looking for legacy settings needing conversion, delaying start-up, especially on systems with many user accounts. Now, user accounts are scanned only if a scan has not been completed.
Cooking report-only study fails
UPGRADE NOTICE: Zero-image studies with reports that were cooked prior to installing this fix need to be reprocessed.
Manually reheating a cold study containing no objects that make up the blob, such as a study containing just a report object, could get stuck waiting for a blob that will never exist. As a result, the study won’t reach the cooked state.
PbR missing from herpa
The change that removed unnecessary file checks before collecting herpa data from the blobs didn’t account for the fact that PbR files are not stored in blobs. As a result, PbR files were not included in the herpa data.
Reheat action's Reprocess option broken
The reheat action’s delete mode didn’t generate the full cache and processed data because it failed to set the study’s preparing state to frozen. This change also sends the storescpreg task to the load balancer for more distributed processing.
Objects can get lost if Dcreg tasks cannot be started
Missing temporary objects were treated the same, whether it was the result of a retry or not. If there was no retry, the system assumes the missing object is a resend. If there is a retry, the system assumes the object was moved to the study directory and continues to register the object.
Media role server fails to create report PDF for media
When the media role was assigned to a dedicated server that does not have apache running, media creation failed to generate report PDF files. Now, media servers request the report PDF file from the Application server.
Creating DICOM Media via EPWS fails in a farm
Creating media in response to a web services request to a dedicated media (role) server failed because the media server doesn’t accept the session ID offered by the service request.
Check MySQL connection on Media and Forward servers in Farm Validator
The check that local database connections use the local MySQL instance and remote database connections use the remote MySQL instance was applied on Application and Registration servers but was missing on Forward and MCS servers.
DetectMysqlVersion error message
The MySQL version string format changed with the update to v8.0, but the pattern used to collect the version number was not updated.
When registering servers through the hyperdirector, the application server marked the IP it used to make the registration call as primary but did not clear the designation from the IPs in the host configuration information it received from the server, resulting in multiple IPs listed as the primary IP.
The web viewer’s frame of reference index, used to indicate series sharing the same physical space, was incorrectly applied across study boundaries, erroneously linking series from independent studies to each other.
After launching the web viewer with a study having fewer series than in the previous web viewer session, the series view would appear to shake because a scroll setting was not reset correctly.
If a user attempted to delete a series or object from the Technologist page and the study was locked by another user, the delete failed because it was double-locked.
When a user edits a user preference, the log entry failed to record the action type indicating it was a user preference edit.
When a legacy user profile setting is found, it is automatically converted to a new value, eliminating the need to notify the user.
The list of SOP classes a device supports is empty when the source device used the built-in default. When imported, the script interpreted the empty list to mean no SOP classes were supported. Now it recognizes an empty list to mean all default SOP classes.
The worklist filter label field could miscalculate the width of the field, causing labels to appear outside the field limits.
If the user cancels a worklist query that triggers the query qualifier, the resulting table contained no column headers.
The purge action’s object list was updated to add missing SOP classes. The list is configurable by editing ~/var/dicom/storageSOPClasses.lst and restarting apache.
The user preferences dashboard settings listed the Statistics dashlet types as a Messages type.
When an image’s pixel spacing was negative, the system overrode the value with a fixed distance which caused some images to appear compressed. Using the absolute value of the defined value provides a usable distance without causing presentation anomalies.
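As a rough sketch of that fallback logic (the class, method, and parameter names are hypothetical):

```java
class PixelSpacing {
    // Sketch: prefer the magnitude of a negative declared spacing over a
    // fixed substitute, which had compressed some images.
    static double usable(double declared, double fallback) {
        if (declared == 0.0 || Double.isNaN(declared)) {
            return fallback;            // nothing usable was declared
        }
        return Math.abs(declared);      // a negative value still encodes a distance
    }
}
```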
If the query qualifier suspended a worklist search and the worklist remained active long enough to trigger a worklist refresh, the worklist refresh ignored the suspension state and performed the query without the query qualifier check.
When taskd is stopping, meaning it received the command to terminate but has not yet stopped, new tasks were created. Running tasks that create new tasks now put the task into the (unprocessed) database.
Series- and object-level auto forwards were inefficient: if sending updates was applied to the forward, tasks were created to check the entire study for resends, not just the affected series or objects.
If a study is deleted from the GUI, including as a result of applying the study cleanup tool, and there are multiple references to the study, possibly as a result of a failed synchronization message or send request, the duplicate reference would not be deleted, causing new tasks to be collapsed and disappear rather than get cleaned up.
When the user attempted to edit a study’s Status field but selected no value from the list, and then selected another field to edit before clicking the Save button, a java exception notification appeared on the user interface and the edit failed.
If a merge is started but incomplete when the server receives a web services command to edit a study (PbR file, specifically) while the queues are backed up and there are late objects that need to be moved, the server could detect the obsoletion time stamps in the PbR while processing the edit and delete (obsolete) the study.
Based on timing, it is possible the multihub cleanup tool could create and leave temporary files on the target server that might end up in the object table. Note that v9 does not support the multihub cleanup tool – there are no hub servers – but this change exists in common code so it is applied here as well.
The patient folder XML page definitions do not support external reports correctly so if an external report macro is added, the patient folder’s report page will display an exception when attempting to render an external SR report. The restriction causing the exception has been removed.
The list management tools were reviewed and optimized to improve lock management and avoid deadlock conditions. The system locks lists on an individual basis and avoids using global locks.
When using a Chrome or Edge browser and secure HTTP in the URL when the server is configured to force unsecure HTTP protocol, redirected viewer requests went into an infinite loop and the study would not load. The solution is to use secure HTTP when declared, regardless of the server’s secure connection setting.
The java VM failed to set the automatic retry flag in the signal handler and when an interrupt occurred when calling a blocking (locking) function while creating a task file, task creation failed rather than get retried.
Attempts to stop taskd’s JVM while the code was inside a loop trying to connect to a non-existing taskd listener caused the termination to hang for up to ten minutes.
When running in authoritative mode and a study exists on multiple mounts, only the data on one of them was detected because the appropriate wrapper was not created in the database when the data was acquired and stored.
When rescheduling tasks from the user interface, the schedule command was not created, causing the collection process to encounter an error collecting the task’s properties.
Double-clicking the merge button when correcting an order to a study caused the function to be called twice. The second time it performed a normal study merge, displaying the study merge page rather than the correction page.
When loading a canned report template containing special characters, the special characters were not recognized and invalid characters appeared in their place.
After updating Chrome, attempts to validate the farm configuration using a secure HTTP connection failed to connect to the other servers because the cross-origin embedder policy setting was not defined in the response header.
The service class column for origin and replica devices on the Devices configuration page was empty because the device class changed and these options failed to be extended to it.
A code-level option exists to suppress low-level log entries when locking a database table. Some low-level errors are handled at a high level, making the log entry unnecessary and misleading.
Attempts to export the data from the origin server failed because the migration tool that checks the replicator’s resources encountered an exception when collecting task counts from all servers. Additionally, the collection effort was running twice, unnecessarily. This has been corrected.
Based on timing, it was possible for a database connection to remain open when a worker thread closed, leaking database connections that would eventually prevent access to the database.
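The usual remedy is to scope the connection to the unit of work so every exit path, including thread shutdown, closes it. A sketch using try-with-resources; the query and DataSource wiring are assumed:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

// Sketch: the connection's lifetime is tied to the unit of work, so a
// closing worker thread cannot leak it.
class Worker {
    static int countTasks(DataSource ds) throws Exception {
        try (Connection conn = ds.getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT COUNT(*) FROM task");
             ResultSet rs = ps.executeQuery()) {
            return rs.next() ? rs.getInt(1) : 0;
        }   // closed on every path, including exceptions during shutdown
    }
}
```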
When the cache repository is in authoritative mode, stream servers might fail to locate a study in the cache because the repo handler issued a request to the stream server that could not run (because taskd doesn’t run on stream servers), resulting in the failure to download images in the viewers.
Cross-origin headers present in the JSP files caused conflicts now that Apache automatically adds them, resulting in a failure to fully decompress images.
A missing animation trigger caused some annotated values to remain hidden in the web viewer.
The Tasks page added a scroll bar to accommodate task queues from many (registration) servers. Also, when the stream server is configured to run in simple mode, meaning it does not run tasks, it is not present on the Tasks page.
When the farm validator runs, it propagates the role configuration file to all servers, which overwrites custom configurations on some servers. The unnecessary propagation calls have been eliminated and servers are set up to accept propagation calls from the application server, only.
On server farms consisting of many registration servers, database update requests could fail and exceed the maximum retries when attempting to register acquired objects in the same study. The number of retries has been increased and retries are put to sleep at varying intervals to avoid collisions.
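Randomized (jittered) retry delays look roughly like this sketch; the retry count and delay bounds are illustrative, not the shipped values:

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch: retry a contended database update with randomized sleeps so
// concurrent registration servers stop colliding in lockstep.
class RetryingUpdate {
    private static final int MAX_RETRIES = 10; // illustrative

    interface Update { void run() throws Exception; }

    static void withJitter(Update update) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                update.run();
                return;
            } catch (Exception e) {
                if (attempt >= MAX_RETRIES) throw e;
                // Sleep a random interval so competing servers desynchronize.
                long delayMs = ThreadLocalRandom.current().nextLong(50, 50L * (attempt + 1));
                Thread.sleep(delayMs);
            }
        }
    }
}
```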
The missing thumbnail panel setting has been added to the web viewer section of the user preferences page.
The temporary directory names created when processing objects were not unique, which doesn’t work in a shared storage environment. The temporary directory names now include a unique identifier.
If an image contains no information, meaning all pixel values are the same, the compressed IQ image is too short to split into individual streams, causing the download to fail.
The web viewer would successfully display an image but report exceptions if the window width or center were zero. The values were set to one to avoid the exception.
A function that checks the task queues on all servers running taskd failed on servers running the limited version of taskd, causing some processes, including reheat requests on stream servers and wormhole takeover, to fail.
Per-object access to the processed data (on tier 3 storage) when reheating and registering data resulted in poor performance if the underlying storage technology was slow. The process has been changed to aggregate the data into blobs stored on local cache to reduce tier 3 storage access.
When registering an object and the taskd session cannot be started, the object was moved to the study directory prematurely. When the Dcreg task finally starts, the registration task was sent to retry and when executed, found no data in the temp directory, exited and released the object.
When mapping objmeta file data to the object table failed, removing the references resulted in a memory leak that eventually triggered the OOM killer, which killed taskd.
When deleting studies whose objects (blobs) have been loaded into the object cache and the object cache database table, the system deleted the cache data but failed to remove the database entry.
If an exception occurred while applying an Action to a list of studies, none of the studies would be marked complete because the status was recorded at the end of the process. To avoid reprocessing studies during the next cycle, completion status is recorded immediately after each study is processed.
The study details information for an entry on the Logs page were missing because a change to the internal date type was not applied to some configurations, and some incorrect COLIDs were used.
The MySQL connection can close, causing the java VM to crash, when loading the object table for a study with a large number (multiple thousands) of objects.
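The note doesn’t state the fix, but one standard mitigation for oversized result sets is keyset pagination, loading the object table in pages; a sketch with assumed table and column names:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Sketch: page through a very large object table instead of materializing
// it in a single result set.
class ObjectLoader {
    static void loadAll(Connection conn, long studyKey, int pageSize) throws Exception {
        long lastKey = 0;
        while (true) {
            int rows = 0;
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT object_key, path FROM object " +
                    "WHERE study_key = ? AND object_key > ? ORDER BY object_key LIMIT ?")) {
                ps.setLong(1, studyKey);
                ps.setLong(2, lastKey);
                ps.setInt(3, pageSize);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        lastKey = rs.getLong(1);
                        rows++;
                        // ... hand the row to the consumer ...
                    }
                }
            }
            if (rows < pageSize) break;   // last page reached
        }
    }
}
```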
When running in data takeover (wormhole) mode, orders created on the Origin server are reproduced on the Replica (v9) server and include the .info file. When no longer in takeover mode, the v9 server does not support .info files. As a result, the server is unable to resolve an order properly. The server now recognizes this state and, if possible, selects one of the orders as the primary order.
The web viewer required a minimum of 16K bytes in the compressed data stream. When the streamed file was smaller than that, decompression failed and no image appeared.
REVERSIBILITY NOTICE: The new blob format (.f.ei4) is incompatible with older versions. If downgraded, processed (blob) data must be purged and regenerated.
The herelod’s compressed stream data failed to apply the initial quality calculation method. For small images, such as MR and CT, this resulted in thumbnail-sized (low resolution) images. When displayed at normal size, they were poor quality images.
If two PbR files exist in a broken study, the system does not attempt to clean it up. It logs the finding and leaves the study in the broken state so the Admin can clean it up manually.
If a duplicate object arrives while the original is part of a collapsible task, and the task fails, the duplicate was not being added to the collapsible task when the task was retried, as intended.
A cross-site scripting vulnerability on the custom logo upload function has been eliminated.
If the patient folder listed a prior study but that study is not available to the user because of an access list restriction, the system reported an error when attempting to display the report in the patient folder. The system is supposed to override the access restrictions on prior studies if the user has the right to view the current study.
The web service’s Report command failed if the request didn’t contain the conditional Option parameter, or the Option parameter didn’t contain the conditional UpdateEditable element.
A debug log entry was created but failed to check the exception criteria, resulting in a misleading log entry. When the exception does not indicate an error, the entry is no longer logged.
A cross-site scripting vulnerability on a component of the techview page has been eliminated.
When multiple users submitted a request for a (study) list, the toolbox applied a global lock which resulted in a delayed response. The global lock for this and other toolboxes has been replaced with RW locks.
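Replacing a global lock with a read/write lock lets concurrent readers proceed while writers remain exclusive; a minimal sketch of the pattern (the toolbox contents here are placeholders):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch: per-toolbox RW lock so concurrent list reads no longer queue
// behind one another.
class ListToolbox {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final ArrayList<String> entries = new ArrayList<>();

    List<String> snapshot() {
        lock.readLock().lock();          // many readers may hold this at once
        try {
            return List.copyOf(entries);
        } finally {
            lock.readLock().unlock();
        }
    }

    void add(String entry) {
        lock.writeLock().lock();         // writers still get exclusive access
        try {
            entries.add(entry);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```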
A performance improvement that updated the web viewer only when the view/image changes eliminated the trigger that activated the layout grid tool, disabling the ability to change the grid layout.
An object processed by a compress action was always re-registered, even when the object did not need to be compressed and therefore did not need to be re-registered.
When deleting a study using the infoCollector tree, for example when using the study cleanup tool, and the affected data does not exist, the system failed to detect the data didn’t exist and reported an exception.
After adding a web page dashlet to a user’s dashboard, an exception occurred because the shared module did not check for the presence of a conditional object.
The free space available on each mounted device was logged every five minutes but the calculator ran every minute. Now the calculator runs every five minutes.
When a person name field was defined to use an enumerated list of names, the system failed to ignore name formatting characters when checking for matches.
An obsolete feature for selecting the web viewer’s grid layout could be activated, leading to erroneous results. The obsolete tool has been removed.
After changing the grid layout to something other than 1-up, the link tool button might be drawn incorrectly because it did not exist in the original layout and was not accounted for in the updated layout.
After restarting taskd, the task is added to the database and when the relative priority is changed to high, the task name gets changed in the database but not the object. As a result, the task remains in the queue.
The options to delete immediately and keep the reports are mutually exclusive, yet they were both active when configuring a purge action. Now, checking the box to keep reports will disable the delete immediate and purge matching object options, and vice versa.
When the patient orientation is invalid, possibly due to rounding errors, it gets reset to the default, but that could cause the web viewer to cross-correlate unrelated images. To avoid this, less significance is given to the individual vector values, and if applicable, the frame of reference is ignored.
Media creation tasks remained in the retry queue after a media session was canceled. The tasks were intended to detect the database records and media directory were removed but encountered an exception when checking the existence of the directory, causing the task to go to retry.
A prefetch action using a compound list that includes a group by union resulted in a database exception because prefetch requests use conflicting group-by directives.
Moving a dashlet from one page to another on the dashboard configuration page was saved only when the dashlet was first created. Moving it afterwards appeared to work, but the change was not saved.
The application server’s error log contained warning messages about non-applicable components failing to run.
When copying profile settings from one user to another using the GUI’s copy profile tool, some toolbar locations and shortcut tables were not copied completely.
Authoritative mode introduced a new status code that the data repository uses to indicate a missing folder. Some parts of the system failed to recognize the status, resulting in errors rather than exception handling, particularly when creating orders.
If an error occurred when adding a note to the patient folder, the message disappeared after the user acknowledged it, but a failure to clean up a data field prevented the user from trying to enter the note again.
Multiple refresh page buttons appeared on some table pages, including the media export page, only one of which performed a page refresh.
Displaying an uploaded PDF file failed on a server farm because the call to extract the document contents, which is performed by a registration server, was issued on the application server.
If the export media table page contains hidden fields, some of which might be included in the default list, the user might receive an error message.
An incorrect character encoding value caused some viewer configuration panel labels to contain invalid text characters.
The ability to set the status when closing the viewer required both report editing and status setting rights. This has been corrected and only the status setting right is required to set the status when terminating a viewer session.
Streaming connection calls returned timeout exceptions instead of disconnection exceptions, which caused the stream server to leak threads. After receiving multiple timeout responses, the stream server checks the connection directly to see if it’s closed, and then releases the associated threads.
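One conventional way to probe a socket directly, shown here as a hedged POSIX/C++ sketch; the stream server's actual check may differ:

```cpp
#include <cerrno>
#include <sys/socket.h>

// After repeated timeouts, peek at the socket: a zero-byte MSG_PEEK read
// means the peer closed the connection, so the worker threads can be freed.
bool peerClosed(int fd) {
    char byte;
    ssize_t n = recv(fd, &byte, 1, MSG_PEEK | MSG_DONTWAIT);
    if (n == 0) return true;                                  // orderly close
    if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK)
        return true;                                          // hard error
    return false;                                             // still alive
}
```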
Key images failed to appear on web page reports because the processed data did not include the compressed image data.
UPGRADE NOTICE: This change applies to cached data. It requires reprocessing affected data. The web viewer did not consider pixel spacing information from all possible locations, specifically the ultrasound region sequence. Without pixel spacing information, the web viewer displayed measurements in pixels rather than linear units.
The farm validation tool failed to acknowledge that repository mounts are read-only when the system is running in takeover mode, thereby generating an invalid warning.
When using an existing user account to create a new one, the system ignored the mandatory field check and could create the new account with missing required fields.
Some secondary capture objects did not contain the orientation data needed to render the object, and the web viewer failed to generate it, resulting in no displayable image.
Attempts to display results on the monitors page for a remote server failed until the remote server had run the monitor page locally.
Action configuration returned unsupported data types that caused the DVCL engine to report an exception.
When a DICOM Q/R SCP returned multiple retrieve AE titles in the response and none of them matched devices registered in the system, an exception occurred and the retrieve failed to complete.
When the same object is sent to the server multiple times, a race condition could leave the study in a partially cooked state.
If one of the XML entries in the user import file was structurally invalid, the system stopped processing the import. Now it logs and skips the user with the affected record and continues with the next entry.
A new study showed its initial state as frozen rather than cooking before the first registration task completed and the study entry was created in the database.
The Group Open tool erroneously included orders in the prior study list. These have been suppressed. Note that a completed order can still be opened if it is listed as the primary study in the session.
Dynamic allocation of buffers in web assembly could invalidate previously allocated buffers, resulting in an exception.
Action lists failed to retain the order of studies when new studies were added (including processing subsequent batches) because the date stamp didn’t provide the necessary resolution and processing some data modified the add date value.
When a graph was requested from the server before the server collected any data points, an invalid (large) initial value could be displayed by default.
A synchronization issue occurred when generating the error response to a web services command, resulting in an unnecessary log file. The processing order has been corrected; the correct error is reported and no log file is generated.
When running in takeover mode and the user is logged into the Replica server, presentation states failed to be saved on the server because the upload failed to locate the Origin server. Additionally, the request always returned a success status, so the user was unaware the operation had failed.
If creating DICOM media on a frozen study, the herpa data is unavailable. As a result, the media is created without it and the data is processed when the user opens the viewer and loads the study.
Editing a study’s report multiple times resulted in leaked study references that would not get cleaned up, causing a study’s state to remain in the cooked state even when reheating.
The query qualifier triggered warnings for result counts far lower than the configured thresholds because the join table failed to include a distinct criterion.
Re-sent objects whose registration tasks exited because the objects were unchanged and didn’t trigger a prep study task could prevent the final step of the previous prep study task from running. As a result, cooked studies lacked herpa data.
When checking for orthogonal orientation, a floating-point comparison of near-zero values erroneously reset and removed the localizer lines.
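A minimal sketch of the tolerance-based comparison implied by this fix; the epsilon value and function names are illustrative assumptions:

```cpp
#include <cmath>

constexpr double kEpsilon = 1e-5;   // hypothetical tolerance

// Rounding can leave "zero" components at values like 1e-7, so compare
// against a tolerance instead of testing v == 0.0 exactly.
bool isZero(double v) { return std::fabs(v) < kEpsilon; }

// Row/column direction cosines are orthogonal when their dot product is ~0.
bool isOrthogonal(const double row[3], const double col[3]) {
    return isZero(row[0] * col[0] + row[1] * col[1] + row[2] * col[2]);
}
```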
After deleting a series or image from the technologist view page, the images might appear in the main viewer when the study is loaded if the open request is issued before the prep study task completes.
When scrolling an image in the web viewer when the cursor was situated over a thumbnail/cross reference image, scrolling stopped because of a missing mouse event.
Carousel images on the technologist page were missing because the cookie session ID was not passed through to the stream server.
When creating media from studies existing on different hub servers, a valid yet irrelevant exception gets logged indicating a failed attempt to generate an ID for an auto-increment field.
The Apache monitor could fail if it issues a query for a thread name after the thread has already terminated.
The farm validation page didn’t recognize the color scheme setting and presented dark text when using dark mode.
A separate thread is spawned to handle gwav decompression in the web and tech viewers. In some cases, additional threads were spawned unnecessarily and the existing ones were left unmanaged. Additionally, buffers exchanged between the threads were not released correctly, resulting in a memory leak.
The task page limited the amount of task data it could display. When large task queues existed or when consolidated task data from many servers was abundant, the buffer size could be exceeded, resulting in an exception and truncated results.
After adding a custom database field, no localized value exists in the resource file and that generated an error message in the logs.
An error message was mistakenly created in the weekly log when the user changed some preference settings.
Hounsfield annotation was performed on the server using the raw data files. Since v9 eliminated raw files, the tool fails, resulting in an error message in the web viewer. The annotation feature has been refactored to use client-side image data.
If a system message occurred while the user had a curtain (popup) panel displayed from the Preferences page, the message content was empty.
The repository handler had debug logging enabled by default. It has been changed to be disabled by default.
The stream server’s packet assembly process could get stuck in a loop when the last entry has multiple entries for the same file and processing for the file fails. The status is not propagated to other processes of the same file, causing these other processes to wait endlessly.
Insufficient checking of a return code permitted some system lists (i.e., owned by the @system account) to appear on a non-admin's saved filter list.
The warning message about password strength indicated a feature that is no longer supported. The message has been updated to reflect the current solution.
Some tools available on the report page broke when the profile file format changed to XML. These include the field to show the radiologist, to show the transcriptionist and to select the key image size.
Log entries indicating an action was performed on a study included an invalid, fixed-text indication that the study state had changed.
Compress action tasks can fail and go onto the failed queue if the study is purged before or while the task is running.
DICOM media requests, from any source, that specified a series or object that was not part of the default – typically, the first – series or object failed because the assigned directory identifier was defined using the default’s ID. Since the default’s series/object was not present, the directory could not be located when building the media file.
A function used to display the results of a search didn't check the user's permissions, allowing someone to misuse the URL to access restricted data.
Task page filters on the name fields returned no matches. The filter function on the Tasks page supported simple text filters only. Now it supports the more complex name filters as well.
Timed-out database connections in idle stream server threads could result in a (regserver) crash when multiple stream servers start running again.
Unprotected thread handling around database connections could cause system components that use the repo handler, including taskd, apache and regserver, to crash when they run after being idle for a period of time (about eight hours, or longer than the mysql wait timeout period).
When acquiring objects, the study directory was created before the coercion rules were applied. If the coercion rule instructed the system to drop (i.e., not register) the object, an empty study folder could persist. The system now creates the study directory only after it knows it’s going to register the object.
The scope of the dirty flag handler changed when the system started caching repository handler instances; it now has to check whether other threads modified the dirty file. Additionally, an unnecessary smart semaphore lock, created when the dirty flag handler accesses its own cached dirty flag, has been changed to a simple memory lock to avoid straining resources.
A cached database connection providing efficient access from the repository handler was not being used by the check overload function.
Processing a wormhole (data takeover) notification to delete a study which does not exist on the Replica server failed because the study’s meta directory didn’t exist. As a result, subsequent notifications from the Origin server could not be sent.
The absence of a default value for the Prepared Study database field resulted in it being assigned NULL for each study registered through the wormhole (data takeover), preventing the column from appearing on the worklist.
DEPENDENCY NOTICE: This fix requires an Origin-side fix. For v7.2, the fix is in 7.2 medley-97. A Replica system received sync messages from multiple hubs when the study was broken (i.e., resided on multiple hubs) on an Origin system. One message indicated additional objects existed; the other indicated objects, and even the study, had been deleted. Depending on the order in which these messages arrived at the Replica, some objects could remain unregistered.
A change to the file name extension of compressed data files was not applied to blob file lookups, causing requests to download JPG images to fail.
The tool used to parse meta data objects ignored empty trailing fields, truncating the data when it was updated. The truncated data caused data import (during takeover) to fail.
When the trailing fields in the object meta files are empty, the system truncated the data, resulting in a failure to read them.
Studies on an origin server with a process mode state set to Store failed to create cache or processed data on the replica server. Since v9 always processes data, the setting is ignored during takeover.
If a group open request includes an order, the viewer loads the studies, including the order, but the images failed to show up because the order contained no blob data, and it halted the streaming of all images.
If a server farm consists of multiple servers but does not include a load balancer (typically because only one server performs each defined role, an uncommon but valid configuration often used in validation testing), the intracom client failed to run because it required the presence of a load balancer.
Reheating and reindexing a study during data takeover could find and register temporary files from the study directory, causing duplication of some objects.
Some special characters, including apostrophe and backslash, in text strings were inserted into the database preceded by a backslash. When displayed in the worklist, the extra backslash character appeared.
UPGRADE NOTICE: This fix applies to new data only. Existing studies affected by the bug must be cleared from the object cache table. If the last field in the object table data is empty when the object is swapped out of the meta database, the data was truncated and the subsequent load would be aborted.
When in takeover mode, the replica server failed to create orders the origin server sends over because the replica server, whose storage is read-only, attempted to create the folder.
Modifying a filtered list assigned to an action caused the action to mishandle the current content setting, causing the action to be applied to existing data regardless of the setting.
When in takeover mode, users and the system were unable to create an order on the replica server because it attempted to create the study repo itself rather than passing the request to the origin server.
When reheat tasks timed out, they were treated as generic registration errors and sent to the failed queue rather than the retry queue.
A mishandled parameter in a call to remove a study from the action processing table prevented studies that no longer match the filter criteria from being removed from the table, consuming resources indefinitely.
The data structure for storing the result of a MySQL query was not bound before executing the query, causing a write to be applied to undefined memory space, resulting in a crash of taskd.
If the studies on an action list do not change between action events, the action fails to execute because the check for an empty array was performed before the list was converted to an array.
Some monitoring tools, specifically Time of SQL Query (s), Memory Usage and Memory Usage Actual, mishandled the input data format, resulting in invalid or missing output graphs.
Importing users and groups from v7.2 failed because the group table name changed, empty table checking was missing and the action filter table was missing an ID field.
When applying a (worklist) table filter by dragging a value into the filter criteria area, the COLID could be missing, resulting in an exception and a failed query.
When applying compound lists, it was possible to miss records that satisfied one of the lists but not the other if the second list included criteria that excluded the records on the first list. By handling the query as a union of separate lists, all matching studies are included.
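A hedged sketch of the union approach, with hypothetical table and column names:

```cpp
#include <string>

// Issue the two list filters as a UNION so a study matching either list is
// returned, instead of letting the second list's exclusions hide rows the
// first list matched. Table and column names are hypothetical.
std::string compoundListQuery(const std::string& filterA,
                              const std::string& filterB) {
    return "SELECT study_id FROM studies WHERE " + filterA +
           " UNION "
           "SELECT study_id FROM studies WHERE " + filterB;
}
```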
Reports submitted from the viewer failed to insert the Study Date and Study Time values after the database date/time record format changed.
A security vulnerability occurring when using the forgotten password feature has been eliminated.
The option to exclude devices in the device import script, importdevices.sh, mistakenly applied to DICOM devices only. Now all devices are checked against the exclusion list.
Some calls to retrieve a repository’s absolute path failed if the repository root itself was a symbolic link.
When the repository root is the same as the repository mount point, temporary files are placed in the repository root directory rather than the tmp directory. When the object was moved to the repository, the system attempted to remove the file from the data tmp directory instead of the repository root directory, leaving unmanaged files in the repository root directory.
The initial quality (IQ) images were saved to the processed repository on (slow) tier 3 storage rather than the local cache repository (on fast tier 1 storage).
When a study is processed in parallel on multiple threads, locking controls were inefficient because they took a long time between retries.
An open viewer whose session had timed out, with the patient folder panel open, continued issuing refresh calls, resulting in exceptions recorded in the error log.
When an SQL exception occurred while searching the database, it could be mishandled and, in some cases, clear an action’s “done” list. The next time the action performed the query successfully, all the studies would get (re)processed.
When creating a new web service device, the default user from an existing web services device would be inserted as the default user of the new device.
Idle stream connections were entering a sleep mode that was not yielding sufficient CPU cycles.
If purging is enabled for an NFS shared drive, makespace() failed to run because the tool used to collect the disk usage data does not work with NFS-mounted devices. The tool now uses the mounted directory instead of the device.
Under some conditions, most often when reheating a study, when the system completed multiple tasks within a short period of time, the task counts were not updated correctly and some completed jobs remained visible on the Tasks page.
When cache processing encountered an error, the error was handled correctly but the status was set to Cooked regardless. Error values are checked now, ensuring the status reflects the processing results.
No solution was in place to trigger reheating a cold study after acquiring new objects.
Studies with no images were excluded from the cooking process, even though they require herpa data and empty blobs before a user can open them.
When passing information to the viewer about the next/previous studies on the user’s worklist, the server mishandled zero-image studies. As a result, the viewer might incorrectly disable the next/previous study buttons.
The action filter states in the Other Lists filter panel page have been changed from a text field to a list of enumerated values.
Some toolbox functions call themselves redundantly, leading to a possible global locking issue. These locks have been changed to local locks to eliminate the possibility of a lockup.
The number of connections a stream server supports has been increased to 8192. Note that each connection requires two threads, making 4096 the maximum number of simultaneous user connections.
None of the viewer files, including the viewer executable itself, were copied to DICOM media because after moving the media option settings to the database, the setting values were not converted properly to Boolean values and therefore misinterpreted when creating the media.
The access key inserted into the PBS file (to support stream server session authentication) was put in the wrong location, causing the viewer to misinterpret the study list when initiating a new session.
When configured as a server farm, the stream server and application server are separate and the session ID managed by the application server is unavailable to streaming connections. As a result, web viewer access from a stream server could not be authenticated until this change, which passes the session ID in the streaming protocol.
The preferred WebAssembly code failed to load because the web viewer interface was missing a MIME type definition, forcing the web viewer to fall back to a sub-optimal technology.
Redundant and time-consuming calls to obtain an object’s repository location were removed because the study location doesn’t change.
Herpa creation tasks preparing a study for cooking recursively locked the cache repository, causing timeout delays.
Tasks that restore an object table record from the meta data could crash deep within JNI when invoking a gRPC client in JNI after database operations have also been performed in JNI. To avoid the situation, object cache mapping has been reimplemented in C/C++ to avoid invoking gRPC from JNI.
The inclusion of an unnecessary session ID when calling the PDF creation tool caused the conversion script to enter an infinite loop, ultimately failing when creating DICOM media.
The inconsistent order of locking and unlocking of two different locks when reprocessing a study’s data could cause the task manager to become deadlocked.
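For illustration, C++17's std::scoped_lock acquires multiple locks with a deadlock-avoidance algorithm, which is one standard way to address inconsistent lock ordering; the function and mutex names below are hypothetical:

```cpp
#include <mutex>

std::mutex studyLock;   // hypothetical locks guarding study and repository state
std::mutex repoLock;

// std::scoped_lock acquires all of its mutexes with std::lock's
// deadlock-avoidance algorithm, so these two call paths can no longer
// deadlock even though they name the locks in opposite orders.
void reprocessStudy() {
    std::scoped_lock both(studyLock, repoLock);
    // ... reprocess the study's data ...
}

void runRepositoryMaintenance() {
    std::scoped_lock both(repoLock, studyLock);
    // ... repository maintenance ...
}
```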
After increasing the default MySQL connection limits (see HPS-371, released in this build), it was determined a single default is not sufficient. A better connection limit default for the system was the original 4, so this setting has been restored. Default limits for Tomcat and Hermes are now set to 32. Also, the connection pool now creates connections as needed meaning none are initialized by default. Default limits for other java VMs can be defined using the override file ~/var/conf/modules.xml. See Jira for a list of affected java VMs and configuration details.
The pattern used for matching MySQL’s version number changed in the current version, resulting in invalid error messages in the log file.
Study row selection on the worklist could become inconsistent, resulting in misapplication of a batch tool.
Merging two or more studies into a new study, which was then merged with a different study and followed by a delete request, could leave invalid state data in the database due to a missing lock when processing the merge and delete requests, preventing the original studies from being registered if they were resent.
While the load balancer server doesn't use the remote database or a local database, it does generate logging data, and that data is logged in the global database. As a result, the load balancer server requires the mysql component.
Media import had not been updated to support the server farm roles, attempting to upload the data to the application server for processing. This feature has been updated to upload the media data to the shared temporary repository and the command to perform the import is submitted to the registration server.
Exported worklists could be downloaded without an active user session if the user manually constructed the applicable URL in a browser window.
The updated DCMTK toolkit changed its behavior processing the samples per pixel value defined in YBR_FULL_422 multi-frame objects, resulting in an error calculating the full image size. A workaround has been applied that intercepts affected image objects and calculates the full image size as defined by the object.
Object level log entries were incorrectly included in the log database. This has been corrected so they appear in the forever logs only.
The load balancer server’s configuration used hostnames rather than IP addresses, which won’t work at sites which are not set up to resolve FQDN. The generator script now uses IP addresses when available and falls back on hostnames when not.
User-initiated study delete requests could cause taskd to lock up when a delete task attempted to add a new cleanup task.
A recent bug fix prevented a RIS user from opening Completed orders in the viewer or web viewer. Support for this behavior has been restored.
When installing a server from scratch, the hyperdirector RPM is pulled in as a dependency but isn’t started, causing a failure during startup since it is expected to be running.
Failure to pick up a modified environment variable before starting the hyperdirector caused the server validator to fail.
Changes applied to user session management within a server farm were not applied to the performance monitor page, resulting in an exception.
The spinner graphic displayed in the terminal window when running the startup script dumped multiple newlines on the screen because the animated character required multibyte character set support, which wasn’t applied by default. The character has been replaced with three dots to indicate the task is in process.
When the user changes some settings, a session refresh updates those settings so they take effect immediately. Changing the assigned viewer was missing from this list of settings. As a result, changes to the applicable viewer didn’t occur until the user initiated a new web session.
The Move Left button on the group member edit page was placed at the midpoint in the group list. On systems with many groups, this placed the button off the initial screen. The button has been moved to the top of the list.
When editing a notification action assigned to a system list, the target email list appeared blank rather than listing the notification recipients.
If no default document type was assigned to the server, the attachment upload GUI did not filter the other settings on the page, allowing users to assign unsupported combinations of settings and causing some uploads to fail.
A recent change to display reports in the worklist patient folder was applied too broadly, affecting old style indexing used by the viewer’s patient folder, making external reports unavailable from the viewer's patient folder window.
Changes in the DCMTK toolkit allowed the system to generate UIDs longer than the maximum field size. The algorithm for generating UIDs has been modified so all UIDs are unique and within the permitted length.
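DICOM caps UIDs at 64 characters; a minimal sketch of enforcing that bound at generation time (the UID composition shown is an assumption, not the actual algorithm):

```cpp
#include <cassert>
#include <string>

constexpr std::size_t kMaxUidLength = 64;   // DICOM's UID length limit

// Hypothetical generator: root + timestamp + counter. The assert enforces
// the 64-character bound before the UID is ever written into an object.
std::string makeUid(const std::string& root, long timestamp, int counter) {
    std::string uid = root + "." + std::to_string(timestamp) + "."
                           + std::to_string(counter);
    assert(uid.size() <= kMaxUidLength);
    return uid;
}
```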
The repository handler uses database locking, but when it cannot connect to the database, the result is an uncaught exception that breaks the locking mechanism.
When reprocessing a study containing no processed repository, the jit processing routine erroneously created legacy thumbnail images.
The system default calculated fields for the Corrected state (p0000) and Report Exist state (p0002) failed to appear in the configuration page, could not be modified and were unavailable as a worklist column. The data field types changed but the new types were not handled by the database.
While an email address is optional in some notification email configurations, it is required when the list owner is the system account. In these cases, users are now prevented from activating the action until an email address is provided.
A client side exception occurred when the user logged out immediately after logging in, before the worklist could display.
When a worklist refresh occurred (manual or automatic) while a report was being edited in the patient folder and a study disappeared from the worklist, reordering the worklist rows refreshed the report edit page as well, clearing any report data that had been entered.
The user account lock status and the login details reported incorrect information when the user selected the account by checking the selection check box at the beginning of the row. The information was also inconsistent when multiple accounts were selected.
When loading a dashlet, an exception could occur after login due to failure to check for an initialized variable.
When a user-initiated task-related action, such as changing the priority of a scheduled task, incurred an error, the return code was mischaracterized, leaving a lock in place. As a result, new tasks would not run.
Autocorrecting studies to orders using patient name as matching criteria and a patient name containing an apostrophe resulted in a search exception and a failure to autocorrect.
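One common remedy, sketched with the MySQL C API's mysql_real_escape_string (a prepared statement would work equally well); the table and column names in the usage comment are hypothetical:

```cpp
#include <mysql/mysql.h>
#include <string>

// Escape a value so an embedded apostrophe (e.g., O'Brien) stays literal.
// The output buffer needs 2*length+1 bytes in the worst case.
std::string escaped(MYSQL* conn, const std::string& value) {
    std::string buf(value.size() * 2 + 1, '\0');
    unsigned long n = mysql_real_escape_string(conn, buf.data(),
                                               value.c_str(), value.size());
    buf.resize(n);
    return buf;
}

// Usage sketch (hypothetical table/column names):
//   std::string q = "SELECT id FROM orders WHERE patient_name = '"
//                   + escaped(conn, name) + "'";
```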
The Partially Inaccessible indicator on the worklist could report an incorrect state if the user clicked the More button to display additional studies while the system was still acquiring them. The call to set the state failed because the page did not handle the request.
The startup script returned the global result variable rather than the local result variable after starting each service on each server in a server farm, resulting in a success status code even when one or more servers failed to start.
Global restrictions were unintentionally blocking non-study related data from the Logs page.
When upgrading from v7.2 to v9, worklist filter lists whose names are too long to fit in the database are skipped, with a warning displayed on the console window. A problem in this handling caused the subsequent list to be ignored, causing the upgrade to drop it as well, without any warning.
The hyperlink in a notification email that launches the images in the web viewer was missing the login prompt. If a browser did not have a valid session cookie already, the web viewer failed to load images.
REVERSIBILITY NOTICE: This change requires regenerating all processed and cached data to a format which is incompatible with previous software versions.
DEPENDENCY NOTICE: This change requires viewer 9.0.4.4 or later.
Under very specific and unlikely conditions, the compression algorithm could encounter a matrix boundary condition that caused the compression effort to fail, resulting in no processed image.
The algorithm for resizing images to fit them in the available web viewer frame might terminate prematurely for images whose size is not a power of two. As a result, the image was improperly resized and appeared blurry.
The technologist view page failed to disable forwarding, editing and deleting partial studies residing on multiple mount points.
Removing the processed data repository failed to change the processing status to frozen because the state change was not applied during the callback.
The user name filter drop down menu used the user name and user ID interchangeably, both in the display and in the database query, leading to confusing results. The field permits users to enter user names, and both the user name and user ID are displayed, but when the command is invoked, the value applied is the user ID.
When some non-visible characters, such as the left and right arrows, were entered into a user name field drop down panel, such as when filtering on the user name, the web page triggered an unnecessary call to search the database.
Some default items in the user name drop down menus are supposed to remain, even when the type-ahead string is applied, but they were filtered out.
When clicking out of a text entry field when configuring the default user in a web services device edit page, the style sheet was cleared and when clicking back into the text field, the user’s custom color setting was not applied.
On the manual forward setup page, the user name field (when forwarding to a folder) could become obstructed by the popup menu.
The report selection tools in the report view in the worklist’s patient folder were functioning incorrectly: the color scheme was hardcoded to dark theme; the report component icons failed to select the corresponding report component; and the delete button was highlighted instead of the report component icon. In addition to resolving these issues, a new button, Open All Reports, was added to load all the report components into a single view.
REVERSIBILITY NOTICE: To downgrade, the plugin license(s) must be regenerated.
The mammography, volume 3D and fusion plugins’ short names changed from the ones used in v7 so after upgrading, the plugin license was not recognized.
The tag list available when configuring calculated fields was unsorted, making it difficult to locate a specific tag.
A change to the time’s short format handler did not handle requests for negated search criteria.
Images having an aspect ratio other than 1:1 caused the technologist view page's carousel to show partial images and the scrolling tools to fail. They also rendered the page's thumbnail image size options useless.
Attempting to open a study while it was still being acquired across multiple registration servers resulted in a race condition, causing the herpa data in the blob to reference more images than have been processed.
Attempting to collect the information in a PbR object failed from the app server because herelod only runs on the registration server. This affects some web services commands and other features such as editing a study from the worklist. A new intracom service was introduced to get PbR object content from a registration server.
A fix to the user manager added an unnecessary call to prompt for a login when loading the technologist view page or the web viewer page.
Uploading attachments to studies or orders completed without error but the attachment was not saved. This was due to a corrupted environment variable extended by the MCS component’s control script.
When collecting study data failed, the result did not contain a proper error, resulting in an exception.
While linked repositories are not recommended – mount points should be linked, not repositories – the configuration is permitted. When present, the system did not always attempt to resolve the link, resulting in failures when checking the study state.
GUI-initiated requests to reprocess or reheat a study were always performed by a single registration server. Now the system allocates these tasks in a round-robin fashion to distribute the load.
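A minimal sketch of thread-safe round-robin selection using an atomic counter; the class name and server list are illustrative:

```cpp
#include <atomic>
#include <string>
#include <vector>

// Hypothetical dispatcher: each request goes to the next registration
// server in turn; fetch_add keeps the rotation correct under concurrency.
class RegServerPool {
public:
    explicit RegServerPool(std::vector<std::string> servers)
        : servers_(std::move(servers)) {}

    const std::string& next() {
        std::size_t i = counter_.fetch_add(1, std::memory_order_relaxed);
        return servers_[i % servers_.size()];
    }

private:
    std::vector<std::string> servers_;
    std::atomic<std::size_t> counter_{0};
};
```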
The local cache repository and its default configuration files are created during startup by the cases ctrl script, but the cases ctrl script isn’t invoked on the registration or stream servers. This function has been moved to the dcviewer ctrl script.
When toggling between the Security Settings page and other server configuration pages, the security page contents may refresh and overwrite the other page’s data because an asynchronous call might have taken too long to complete.
When adding cw3 support to the web viewer and technologist view pages, some new javascript pages were not included when running in debug mode.
The indicator on the user accounts page showing a user is logged in failed because the timestamp field type was changed in the database but the check wasn’t updated accordingly.
A retired function called when manipulating a report, such as unfinaling a report or removing an addendum, resulted in an exception. The retired function has been replaced with one supported by v9.
When a DICOM AE requests a study using a DICOM Retrieve request, the forward tasks could fail to apply the soft edit changes causing the data to be sent without the latest updates.
When using the local MCS service from a worklist server to create DICOM media containing studies from two or more different hub servers, the temporary directory names created on the hub servers did not always match the directory names on the worklist server. If the names were not unique, the conflict resulted in missing files. Additionally, the MCS started constructing the DICOMDIR file after the transfer from the first hub server completed, without waiting for transfers from all hub servers to complete.
A change to handling Boolean fields in the database was not extended to the user account lock state field, causing attempts to unlock a locked user account to fail.
While users are not supposed to open order or zero-image studies, requests to do so can occur and are handled. But the stream server failed to process these studies, resulting in a hang when attempting to open the viewer.
When the top item in the future queue was a prepstudy task for an active dcregupdate task, the task was postponed but the system failed to remove it from the top of the queue. Since the task manager only looked at the top item in the queue, task processing became deadlocked.
When the stream connection encounters an exception, such as an unexpected SSL exception, the viewer attempts to reestablish the connection by issuing a fast-connection token, but the server returns an invalid response, hanging the viewer as it waits indefinitely for the appropriate response.
The streaminfo log file was not rotated and continued to grow. The file has been added to the forever log rotation schedule.
Concurrent writes to the stream channel caused by the inclusion of streaming metric data in the data stream resulted in data corruption on the channel. This has been mitigated by submitting synchronous responses to incoming commands on a dedicated outbound queue. Additionally, a mechanism is in place to limit the data packet size. This control setting, if needed, would be assigned by the viewer.
When a hub server is backed up, the command to purge an order across the dotcom after correcting it to an image failed to propagate. As a result, the study could not be opened from the RIS because the search for the study returned multiple items (the study and the lingering orders).
Worklist filters for name fields, study size and multi-value fields have been updated to support features available in earlier versions, including the ability to search on individual name components.
The length of enumerated values assigned to a database field was not checked, resulting in unexpected values and results. The length is defined on the setup page, and value lengths are now enforced before saving.
Uncaught exceptions coming from Internet Explorer were not handled properly, resulting in a web page exception.
The method used to open the help pages in a new window blocked pop-ups by default. The setting has been changed to allow the new tab to open without user acknowledgement.
In a dotcom where the master is the child server, a report edit could get processed on the child and propagated to the parent before the parent registered the original report. If a report notification event arrived at the parent before the report was registered, the event notification failed to trigger because not all the fields had been updated.
Given the special handling of static user accounts, such as the system account, the user account export script failed to export any information. The script now ignores static accounts.
The Study Changed field did not recognize saving report objects. As a result, the study fingerprint wasn’t updated and the change state remained untouched.
When creating or modifying a DICOM device entry using a duplicate AE Title and the user decides to ignore the warning and save it anyway, the software failed to apply the change because the override flag was ignored.
When a study update and study acquisition event occur within the same period, the study acquisition notification message could be suppressed due to the message reduction process. Now, study acquisition events are no longer collapsed with study update events.
The mechanism used to reconnect the persistent database connection was not implemented, resulting in database access errors.
Studies with compressed images greater than 1MB displayed corrupted (noisy) images because processing failed to buffer pages correctly.
The task manager could stop sending order notification messages to web service devices if the web services device is inaccessible and a message task was sent to retry but then deleted or suspended. When the web services device is again accessible, future messages would be collapsed behind the deleted retry task.
After correcting the importation of custom worklist layouts when upgrading from v7.2 to v8, the action buttons and lock indicator were dropped because v7.2 does not store them in the worklist configuration. By default, upgrades include the default v8 worklist action buttons.
If a saved worklist contains conditional coloring on a hidden column, an error occurs because the coloring tool cannot locate the column and the worklist appears as an empty list.
A user account’s password settings could be applied after making temporary changes to the account’s LDAP settings, even when the account was configured to use an LDAP authentication agent.
Pressing the More bar to display the next page of worklist entries could result in duplicate rows if the user has no default worklist defined and is in a group with an unsorted, unfiltered default worklist defined.
Processing a late-arriving object resulted in reprocessing existing objects’ initial quality blob (thumbnail) data because the herpa creator did not yet check for existing data.
The web viewer failed to launch on a Hyper+ farm system in which the stream server runs on a different server than the application/web server because the web viewer was passed only the web service ports and not the full server URL.
Processing large objects into blobs could result in corrupt data due to a missing lock.
The java component upgrade applied in Hyper+, replacing the old unix socket implementation, does not support the same socket options. When attempting to forward studies under certain conditions, an unsupported option caused an exception and the request failed.
Report view templates using a field with the VR of SI, such as the Interpretation Status ID field, would log errors and display the raw data because support for the VR type was removed.
Copying a study to the worklist folder failed because the data directory was not created, a result of moving the storestate.rec file from DICOM repository to the meta repository.
UPGRADE NOTICE: This change invalidates all blob (processed) data in the cache. A data value overflow condition existed in the blob header when the blob size exceeded 2 GB, causing blob creation (processing) to miss some images.
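For illustration, a signed 32-bit size field wraps just above 2 GiB (INT32_MAX is 2,147,483,647 bytes); widening the field is the standard remedy. The struct layout below is a hypothetical sketch, not the actual blob header:

```cpp
#include <cstdint>

// Hypothetical layouts: a signed 32-bit total wraps once a blob passes
// INT32_MAX bytes, which matches images "disappearing" past 2 GB.
struct BlobHeaderOld { int32_t  totalSize; };   // overflows above ~2.1e9
struct BlobHeaderNew { uint64_t totalSize; };   // room for any realistic blob

// Keep the accumulation itself in 64-bit arithmetic as well.
uint64_t addImage(uint64_t runningSize, uint64_t imageBytes) {
    return runningSize + imageBytes;
}
```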
The taskd client canceled the keepalive timer when terminating the connection which prevented it from being restarted, causing the failure of reprocessing and reindexing requests.
When requesting to clear the cache from the Tech View page very quickly after loading the web page, a maintenance procedure might fail to complete before the cache clearing started, resulting in an error and the cache data remaining in the repository.
When registering the PbR before the image objects, the value of the Date field could display the PbR’s creation date-time rather than the image object’s study date-time because the calculation of the Date field from the minimum SOP instance was not performed.
A low level lock timer created a condition that limited the number of times the system could attempt to release a reference counter, yet during certain real world scenarios, more attempts are needed. As a result, reference counts were not released, causing an inconsistent state in the data.
Some documented MySQL exceptions occurred but the recommended solution – retry the query/update – was not applied.
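A hedged sketch of the retry pattern using the MySQL C API; MySQL documents deadlock (error 1213) and lock-wait timeout (error 1205) as transient conditions whose recommended handling is to retry. The retry count is an arbitrary choice:

```cpp
#include <mysql/mysql.h>

// Retry the statement when MySQL reports a transient lock error:
// 1213 = ER_LOCK_DEADLOCK, 1205 = ER_LOCK_WAIT_TIMEOUT.
bool runWithRetry(MYSQL* conn, const char* sql, int maxAttempts = 3) {
    for (int attempt = 1; attempt <= maxAttempts; ++attempt) {
        if (mysql_query(conn, sql) == 0)
            return true;                              // statement succeeded
        unsigned int err = mysql_errno(conn);
        if (err != 1213 && err != 1205)
            return false;                             // not transient: give up
    }
    return false;                                     // still failing: report
}
```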
Multiple collapsed prepStudy tasks could exist in the task queue at the same time due to a race condition when creating these tasks.
prepStudy tasks in the retry queue could not be terminated by the post-collapse cleanup function.
After upgrading Chromium (used by Chrome, Edge, Safari and other browsers) to Version 106.0.5249.103 or later, some of the browser’s drag and drop features such as applying a worklist filter from a column header or column value corrupted the web page contents resulting in a disorganized layout.
A change in the DCMTK toolkit required connection timeouts to be assigned earlier than they were. As a result, all but the first send request and all the receive requests used the built-in timeout value.
Some operations could be copied from the browser’s network panel and invoked from another browser by a user with different permissions, allowing users to perform unpermitted operations. The missing permission check has been applied.
If the Institution Name value contains an apostrophe and the field is used in the filter criteria applied on the worklist, the open next/previous study command results in an exception due to an improperly formed query.
The system ignored global restrictions when selecting the next/previous study. If the user does not have permission to view the selected study, they end up with an invalid study error.
A typo existed in the term “ForwardStudy” in log file entries for a web services study forward event.
Newly created document types failed to show up on the document type configuration page until after a browser page refresh.
The viewer version number in the uploaded viewer logs was incorrect because the data was taken from the wrong object.
When a registered viewer device issues a cache state request to the server and the server does not have an explicitly defined prior study cache state setting, the parsing algorithm misinterprets the parsing results and registers an unnecessary exception in the log file.
The initial dotcom setup process failed to include the server's self ID setting in the default configuration file. As a result, the support account was not recognized.
A reorganization of the component start up scripts broke the setup of the default pb-scp configuration file, resulting in appending the wrong defaults to the end of the configured settings which were then taken as the configured value.
Actions failed to run because the path used to identify the curl script used the removed custom component. The path has been updated to use the OS-supplied curl tools.
The persistent database connections would not reconnect if the connection was lost or timed out, resulting in retried attempts to register objects, among other incomplete database requests.
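A minimal reconnect sketch using the MySQL C API's mysql_ping; note that the reconnect option's name and argument type vary across client library versions, so treat this as an assumption-laden illustration:

```cpp
#include <mysql/mysql.h>

// mysql_ping() verifies the server is reachable; with the client reconnect
// option enabled it transparently reopens a connection the server closed
// after its wait timeout, instead of failing subsequent requests.
bool ensureAlive(MYSQL* conn) {
    bool reconnect = true;
    mysql_options(conn, MYSQL_OPT_RECONNECT, &reconnect);
    return mysql_ping(conn) == 0;
}
```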
When attempting to acquire and register large numbers of objects in a short period of time, herelod processes failed to terminate cleanly, waiting unnecessarily on the release of conditional variables, resulting in failed registration tasks and dropped objects.
After upgrading java, an incompatible JAX-WS file caused all web services commands to fail. Upgraded JAX-WS to version 2.3.5.
The location of java has moved but the path variable used in multiple scripts, including the user account import and export tools, still pointed to the former location.
Persistent and non-persistent database connections would release the SQL library object when terminating, invalidating the persistent connection and causing unstable behavior in other threads.
The updated version of MySQL, using the carry-over settings, treats truncation as an error as opposed to truncating the value automatically. The settings have been updated to default to the previous truncation behavior.
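For illustration, recent MySQL versions enable strict SQL modes by default, and clearing the strict flag for a session restores the legacy truncate-with-warning behavior; whether to relax strict mode is a deployment decision, and this sketch is an assumption about the mechanism, not the product's actual change:

```cpp
#include <mysql/mysql.h>

// Strip the strict flag from the session's sql_mode so oversized values are
// truncated with a warning, as older MySQL versions did, rather than
// rejected as errors. Other configured modes are preserved by REPLACE.
bool restoreLegacyTruncation(MYSQL* conn) {
    return mysql_query(conn,
        "SET SESSION sql_mode = "
        "REPLACE(@@sql_mode, 'STRICT_TRANS_TABLES', '')") == 0;
}
```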
A study with no cache or processed data directories, e.g., a study with just a PbR object, encountered an exception when preparing the meta data because of a failure to examine the return code value.
If the user manager runs on an independent server, login attempts failed because the software did not pass data between discrete objects correctly.
Clearing cache from the Technologist view page deleted the data but failed to update the internal processing state value because the feature to track processing status across multiple servers was not yet implemented.
Reindexing a study failed when the ingestion and application components run on separate servers because the application server has no registration abilities. The registration request is now submitted to one of the registration servers.
The stream server the viewer uses to download the data is defined in the herpa data but the early v9 viewer does not use the value yet. Until this is available, the server will leave the setting empty if it determines the stream service and the web service are running on the same server, forcing the viewer to fall back on its assumption they are the same. Note this solution only works when running stream and web services on the same server. When the services are separated, an updated viewer is required.
Static user passwords failed to account for the updated hash format.
Updating the password hash format missed a few places, including the Change Password page.
Improper handling of a return code resulted in clearing the action history file when a database query encountered an anomaly or simply failed to complete.
Parsing the date-time values in patient folder notes assumed a 12-hour clock rather than a 24-hour clock.
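A minimal sketch of the distinction: strptime's %I accepts only 01-12, so a value such as 14:30 fails or is mangled, while %H covers the full 00-23 range. The format string and field layout here are assumptions:

```cpp
#include <cstring>
#include <time.h>   // strptime (POSIX)

// "%H" parses the 00-23 range used in the notes; "%I" would reject or
// mangle any time after 12:59.
bool parseNoteTime(const char* text, struct tm* out) {
    std::memset(out, 0, sizeof(*out));
    return strptime(text, "%H:%M", out) != nullptr;
}
```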
Correcting a study to an order might fail because orders don’t have an owner hub, which is required to determine where the combined study resides. As a result, manual corrections from the GUI reported a failure to the user.
A missing RMI call caused the server correcting a late-arriving order to a study to not update the study data with the order data if the correcting server was not the study owner.
If the shutdown process encountered an exception, which could be legitimate depending on the timing/sequencing, it could exit before terminating hermes.
Back-end support for forwarding studies from the patient folder was incomplete, resulting in no action when the button was clicked.
The query time reported in a slow query log entry was in seconds but tagged as being in milliseconds.
User accounts with empty passwords, which is not a valid state, could not be corrected because the missing password was not handled and caused an error when editing.
Duplicate tasks that weren’t collapsed were not returned to the duplicate task map, causing them to remain in the retry queue until executed. Under certain conditions, the retry queue could grow large with unnecessary duplicate tasks.
While looking for plugin license files, the system failed to recognize plugins that were not distributed as DLL files.
Identifying the email address offered as the default when configuring a Notify action failed for internally-defined accounts, such as the system account. In this case, the default comes up empty, requiring the user to explicitly declare the email recipient.
If the tasks page’s filter panel was open when the user called up a different web page, the filter panel was not closed and remained on the screen.
Viewer sessions were not recognized after restarting Apache, causing the cached data fields (eg, Percent Loaded) to report no data until the user logged out and back in.
Searching a worklist using a date value in the quick filter field that would result in a query qualifier exception returned an error message and no data because the date filter was improperly encoded in the database search request.
The compress data action used a static pathname to the study rather than using the repository’s location finder. If the study was moved from its original location, the compress action could not locate it and therefore failed to process the data.
A system lock failed to be released because of a missing constructor. The constructor has been added to avoid the stuck lock. Also, when a user attempted to break a system lock, which is not permitted, the user received no explanation of why the lock remained. Now the user is informed that they lack permission to break a system lock.
When using a custom port to launch the web viewer, the server would mishandle parsing the URL to locate the host name, causing the request to fail.
The plugin licensing enhancements failed to recognize custom plugin modules because the new naming rules were not applied correctly to custom plugin file names.
When using the web services command to create a user and including the password option tag but specifying no options, the request would fail because the server could not parse the empty string from the request.
The delete button was not available from the patient folder if the study contained more than one report object (i.e., there was an addendum to the main report).