eRAD
PACS Server
When creating media from studies residing on different hub servers, a harmless but irrelevant exception was logged indicating a failed attempt to generate an ID for an auto-increment field.
The Apache monitor could fail if it issued a query for a thread name after the thread had already terminated.
The farm validation page didn’t recognize the color scheme setting and presented dark text when using dark mode.
A separate thread is spawned to handle gwav decompression in the web and tech viewers. In some cases, additional threads were spawned unnecessarily and the existing ones were left unmanaged. Additionally, buffers exchanged between the threads were not released correctly, resulting in a memory leak.
The task page limited the amount of task data it could display. When large task queues existed or when consolidated task data from many servers was abundant, the buffer size could be exceeded, resulting in an exception and truncated results.
After adding a custom database field, no localized value existed in the resource file, which generated an error message in the logs.
An error message was mistakenly written to the weekly log when the user changed some preference settings.
Hounsfield annotation was performed on the server using the raw data files. Since v9 eliminated raw files, the tool failed, resulting in an error message in the web viewer. The annotation feature has been refactored to use client-side image data.
If a system message occurred while the user had a curtain (popup) panel displayed from the Preferences page, the message content was empty.
The repository handler had debug logging enabled by default. It has been changed to be disabled by default.
The stream server’s packet assembly process could get stuck in a loop when the queue contained multiple entries for the same file and processing for that file failed. The failure status was not propagated to the other processes handling the same file, causing them to wait endlessly.
Insufficient checking of a return code permitted some system lists (i.e., those owned by the @system account) to appear on a non-admin user’s saved filter list.
The warning message about password strength indicated a feature that is no longer supported. The message has been updated to reflect the current solution.
Some tools available on the report page broke when the profile file format changed to XML. These include the field to show the radiologist, the field to show the transcriptionist, and the key image size selector.
Log entries indicating an action was performed on a study included an invalid, fixed-text indication that the study state changed.
Compress action tasks could fail and go onto the failed queue if the study was purged before or while the task was running.
DICOM media requests, from any source, that specified a series or object that was not part of the default – typically, the first – series or object failed because the assigned directory identifier was defined using the default’s ID. Since the default’s series/object was not present, the directory could not be located when building the media file.
A function used to display the results of a search didn’t check the user’s permissions, allowing someone to manipulate the URL to access restricted data.
Task page filters on the name fields returned no matches because the filter function on the Tasks page supported simple text filters only. It now supports the more complex name filters as well.
Timed-out database connections in idle stream server threads could result in a (regserver) crash when multiple stream servers start running again.
Unprotected thread handling around database connections could cause system components that use the repo handler, including taskd, apache and regserver, to crash when they run after being idle for a period of time (about eight hours, or longer than the MySQL wait_timeout period).
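The idle-timeout failure described above can be avoided by validating a cached connection before handing it out. Below is a minimal Python sketch of that pattern; the class name, the is_alive() check and the factory callback are hypothetical illustrations, not the actual repo handler code.

```python
import time

# Hypothetical sketch: guard a cached database connection against the
# server-side idle timeout by reconnecting instead of returning a dead
# handle. MySQL's default wait_timeout is 8 hours.
WAIT_TIMEOUT = 8 * 60 * 60

class CachedConnection:
    def __init__(self, connect):
        self._connect = connect          # factory that opens a new connection
        self._conn = connect()
        self._last_used = time.monotonic()

    def get(self):
        # If the server may have closed the connection while it sat idle,
        # open a fresh one rather than handing out the stale handle.
        idle = time.monotonic() - self._last_used
        if idle >= WAIT_TIMEOUT or not self._conn.is_alive():
            self._conn = self._connect()
        self._last_used = time.monotonic()
        return self._conn
```

The same validate-before-use check applies to any component that caches connections across long idle periods.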
When acquiring objects, the study directory was created before the coercion rules were applied. If the coercion rule instructed the system to drop (i.e., not register) the object, an empty study folder could persist. The system now creates the study directory only after it knows it’s going to register the object.
The scope of the dirty flag handler changed when the system started caching repository handler instances. It now has to check whether other threads modified the dirty file. Additionally, an unnecessary smart semaphore lock, created when the dirty flag handler accesses its own cached dirty flag, has been changed to a simple memory lock to avoid a strain on resources.
A cached database connection providing efficient access from the repository handler was not being used by the check overload function.
Processing a wormhole (data takeover) notification to delete a study which does not exist on the Replica server failed because the study’s meta directory didn’t exist. As a result, subsequent notifications from the Origin server could not be sent.
The absence of a default value for the Prepared Study database field resulted in it being assigned NULL for each study registered through the wormhole (data takeover), preventing the column from appearing on the worklist.
DEPENDENCY NOTICE: This fix requires an Origin-side fix. For v7.2, the fix is in 7.2 medley-97. A Replica system received sync messages from multiple hubs when the study was broken (i.e., resided on multiple hubs) on an Origin system. One message indicated additional objects exist; the other indicated objects, and even the study, were deleted. Depending on the order in which these messages arrived at the Replica, some objects could remain unregistered.
A change to the file name extension of compressed data files was not applied to blob file lookups, causing requests to download JPG images to fail.
The tool used to parse meta data objects ignored empty trailing fields, truncating the data when it was updated. The truncated data caused data import (during takeover) to fail.
When the trailing fields in the object meta files were empty, the system truncated the data, resulting in a failure to read them.
Studies on an origin server with a process mode state set to Store failed to create cache or processed data on the replica server. Since v9 always processes data, the setting is ignored during takeover.
If a group open request included an order, the viewer loaded the studies, including the order, but the images failed to appear because the order contained no blob data, which halted the streaming of all images.
If a server farm consists of multiple servers but does not include a load balancer – typically because only one server performs each defined role, an uncommon but valid configuration commonly used in validation testing – the intracom client failed to run because it required the presence of a load balancer.
Reheating and reindexing a study during data takeover could find and register temporary files from the study directory, causing duplication of some objects.
Some special characters, including apostrophe and backslash, in text strings were inserted into the database preceded by a backslash. When displayed in the worklist, the extra backslash character appeared.
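The underlying class of bug can be illustrated with any SQL driver: escaping a value by hand and then storing the escaped form persists the escape characters, while a parameterized query stores the text exactly as entered. A minimal sketch using Python’s built-in sqlite3 module (the table and column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE worklist (patient_name TEXT)")

name = "O'Brien\\Smith"  # a name containing an apostrophe and a backslash

# Buggy approach: pre-escape the value by hand, then store the escaped
# string -- the escape backslashes end up in the database.
escaped = name.replace("\\", "\\\\").replace("'", "\\'")
conn.execute("INSERT INTO worklist VALUES (?)", (escaped,))

# Correct approach: let the driver bind the raw value.
conn.execute("INSERT INTO worklist VALUES (?)", (name,))

rows = [r[0] for r in conn.execute("SELECT patient_name FROM worklist")]
print(rows[0])  # escaped copy: extra backslashes are visible
print(rows[1])  # bound copy: displays exactly as entered
```

Parameter binding sidesteps escaping entirely, so what is stored is what is displayed.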
UPGRADE NOTICE: This fix applies to new data only. Existing studies affected by the bug must be cleared from the object cache table. If the last field in the object table data is empty when the object is swapped out of the meta database, the data was truncated and the subsequent load would be aborted.
When in takeover mode, the replica server failed to create orders the origin server sends over because the replica server, whose storage is read-only, attempted to create the folder.
Modifying a filtered list assigned to an action caused the action to mishandle the current content setting, causing the action to be applied to existing data regardless of the setting.
When in takeover mode, users and the system were unable to create an order on the replica server because it attempted to create the study repo itself rather than passing the request to the origin server.
When reheat tasks timed out, they were treated as generic registration errors and sent to the failed queue rather than the retry queue.
A mishandled parameter in a call to remove a study from the action processing table prevented studies that no longer match the filter criteria from being removed from the table, consuming resources indefinitely.
The data structure for storing the result of a MySQL query was left unbound before executing the query, causing a write to undefined memory and a crash of taskd.
If the studies on an action list do not change between action events, the action fails to execute because the check for an empty array was performed before the list was converted to an array.
Some monitoring tools, specifically Time of SQL Query (s), Memory Usage and Memory Usage Actual, mishandled the input data format, resulting in invalid or no output graphs.
Importing users and groups from v7.2 failed because the group table name changed, empty table checking was missing and the action filter table was missing an ID field.
When applying a (worklist) table filter by dragging a value into the filter criteria area, the COLID could be missing, resulting in an exception and a failed query.
When applying compound lists, it was possible to miss records that satisfied one of the lists but not the other if the second list included criteria that excluded the records on the first list. By handling the query as a union of the separate lists, all matching studies are included.
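The union-of-lists approach can be sketched as follows, using Python’s sqlite3 with hypothetical table and filter definitions. The point is that combining the two lists into one compound WHERE clause loses rows that satisfy only one list, while a UNION includes every study matching either list.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE studies (id INTEGER, modality TEXT, bodypart TEXT)")
conn.executemany("INSERT INTO studies VALUES (?, ?, ?)", [
    (1, "CT", "CHEST"),
    (2, "MR", "HEAD"),
    (3, "CT", "HEAD"),
])

list_a = "modality = 'CT'"                        # matches studies 1 and 3
list_b = "bodypart = 'HEAD' AND modality = 'MR'"  # matches study 2 only

# Compound form: AND-ing the lists excludes rows matched by only one list.
compound = conn.execute(
    f"SELECT id FROM studies WHERE ({list_a}) AND ({list_b})").fetchall()

# Union form: each list runs as its own query; results are merged.
union = conn.execute(
    f"SELECT id FROM studies WHERE {list_a} "
    f"UNION SELECT id FROM studies WHERE {list_b}").fetchall()
print(sorted(r[0] for r in union))  # [1, 2, 3]
```

Here the compound query returns nothing at all, while the union returns every study that satisfies either list.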
Reports submitted from the viewer failed to insert the Study Date and Study Time values after the database date/time record format changed.
A security vulnerability occurring when using the forgotten password feature has been eliminated.
The option to exclude devices in the device import script, importdevices.sh, mistakenly applied to DICOM devices only. Now all devices are checked against the exclusion list.
Some calls to retrieve a repository’s absolute path failed if the repository root itself was a symbolic link.
When the repository root is the same as the repository mount point, temporary files were placed in the repository root directory rather than the tmp directory. When the object was moved to the repository, the system attempted to remove the file from the data tmp directory instead of the repository root directory, leaving unmanaged files in the repository root directory.
The initial quality (IQ) images were saved to the processed repository on (slow) tier 3 storage rather than the local cache repository (on fast tier 1 storage).
When a study is processed in parallel on multiple threads, locking controls were inefficient because they took a long time between retries.
An open viewer with an open patient folder panel whose viewer session had timed out issued refresh calls, resulting in exceptions recorded in the error log.
When an SQL exception occurred while searching the database, it could be mishandled and, in some cases, clear an action’s “done” list. The next time the action performed the query successfully, all the studies would get (re)processed.
When creating a new web service device, the default user from an existing web services device would be inserted as the default user of the new device.
Idle stream connections were entering a sleep mode that was not yielding sufficient CPU cycles.
If purging is enabled for an NFS shared drive, makespace() failed to run because the tool used to collect the disk usage data does not work with NFS-mounted devices. The tool now uses the mounted directory instead of the device.
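Querying free space through the mounted directory rather than the block device is the portable approach for NFS shares. A minimal Python sketch of the technique follows; this illustrates the directory-based query only and is not the actual makespace() code.

```python
import os

def free_bytes(mount_dir):
    """Return the bytes available to non-root users on the filesystem
    containing mount_dir. statvfs() on a directory works for local and
    NFS-mounted filesystems alike, whereas tools that open the device
    node fail for NFS, where no local block device exists."""
    st = os.statvfs(mount_dir)
    return st.f_bavail * st.f_frsize

print(free_bytes("/"))
```

The same figures are what `df <directory>` reports, which is why querying the mount directory works where querying the device does not.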
Under some conditions, perhaps mostly when reheating a study, when the system completes multiple tasks within a short period of time, the task counts were not updated correctly and some completed jobs remained visible on the Tasks page.
When cache processing encountered an error, the error was handled correctly but the status was set to Cooked regardless. Error values are now checked, ensuring the status reflects the processing results.
No solution was in place to trigger reheating a cold study after acquiring new objects.
Studies with no images were excluded from the cooking process, even though they require herpa data and empty blobs before a user can open them.
When passing information to the viewer about the next/previous studies on the user’s worklist, the server mishandled zero-image studies. As a result, the viewer might incorrectly disable the next/previous study buttons.
The action filter states in the Other Lists filter panel page have been changed from a text field to a list of enumerated values.
Some toolbox functions called themselves redundantly, leading to a possible global locking issue. These locks have been changed to local locks to eliminate the possibility of a lockup.
The number of threads a stream server supports has been increased to 8192. Note that each connection requires two threads, making 4096 the maximum number of simultaneous user connections.
None of the viewer files, including the viewer executable itself, were copied to DICOM media. After the media option settings moved to the database, the setting values were not converted properly to Boolean values and were therefore misinterpreted when creating the media.
The access key inserted into the PBS file (to support stream server session authentication) was put in the wrong location, causing the viewer to misinterpret the study list when initiating a new session.
When configured as a server farm, the stream server and application server are separate and the session ID managed by the application server is unavailable to streaming connections. As a result, web viewer access from a stream server could not be authenticated until this change, which passes the session ID in the streaming protocol.
The preferred WebAssembly code failed to load because the web viewer interface was missing a MIME type definition, forcing the web viewer to fall back to a sub-optimal technology.
Redundant and time-consuming calls to obtain an object’s repository location were removed because the study location doesn’t change.
Herpa creation tasks preparing a study for cooking recursively locked the cache repository, causing timeout delays.
Tasks that restore an object table record from the meta data could crash deep within JNI when invoking a gRPC client in JNI after database operations have also been performed in JNI. To avoid the situation, object cache mapping has been reimplemented in C/C++ to avoid invoking gRPC from JNI.
The inclusion of an unnecessary session ID when calling the PDF creation tool caused the conversion script to enter an infinite loop and ultimately fail when creating DICOM media.
The inconsistent order of locking and unlocking of two different locks when reprocessing a study’s data could cause the task manager to become deadlocked.
After increasing the default MySQL connection limits (see HPS-371, released in this build), it was determined a single default is not sufficient. A better connection limit default for the system was the original 4, so this setting has been restored. Default limits for Tomcat and Hermes are now set to 32. Also, the connection pool now creates connections as needed meaning none are initialized by default. Default limits for other java VMs can be defined using the override file ~/var/conf/modules.xml. See Jira for a list of affected java VMs and configuration details.
The pattern used for matching MySQL’s version number changed in the current version, resulting in invalid error messages in the log file.
Study row selection on the worklist could become inconsistent, resulting in misapplication of a batch tool.
Merging two or more studies into a new study which was then merged with a different study, followed by a delete request, could leave invalid state data in the database due to a missing lock when processing the merge and delete requests. This prevented the registration of the original studies if they were resent.
While the load balancer server doesn’t use the remote database or a local database, it does generate logging data, and that data is logged in the global database. As a result, the load balancer server requires the mysql component.
Media import had not been updated to support the server farm roles, attempting to upload the data to the application server for processing. This feature has been updated to upload the media data to the shared temporary repository and the command to perform the import is submitted to the registration server.
Exported worklists could be downloaded without an active user session if the user manually constructed the applicable URL in a browser window.
The updated DCMTK toolkit changed its behavior processing the samples per pixel value defined in YBR_FULL_422 multi-frame objects, resulting in an error calculating the full image size. A workaround has been applied that intercepts affected image objects and calculates the full image size as defined by the object.
Object level log entries were incorrectly included in the log database. This has been corrected so they appear in the forever logs only.
The load balancer server’s configuration used hostnames rather than IP addresses, which won’t work at sites which are not set up to resolve FQDN. The generator script now uses IP addresses when available and falls back on hostnames when not.
User-initiated study delete requests could cause taskd to lock up when a delete task attempted to add a new cleanup task.
A recent bug fix prevented a RIS user from opening Completed orders in the viewer or web viewer. Support for this behavior has been restored.
When installing a server from scratch, the hyperdirector RPM is pulled in as a dependency but isn’t started, causing a failure during startup since it is expected to be running.
Failure to pick up a modified environment variable before starting the hyperdirector caused the server validator to fail.
Changes applied to user session management within a server farm were not applied to the performance monitor page, resulting in an exception.
The spinner graphic displayed in the terminal window when running the startup script dumped multiple newlines on the screen because the animated character required multibyte character set support, which wasn’t applied by default. The character has been replaced with three dots to indicate the task is in process.
When the user changes some settings, a session refresh updates those settings so they take effect immediately. Changing the assigned viewer was missing from this list of settings. As a result, changes to the applicable viewer didn’t occur until the user initiated a new web session.
The Move Left button on the group member edit page was placed at the midpoint in the group list. On systems with many groups, this placed the button off the initial screen. The button has been moved to the top of the list.
When editing a notification action assigned to a system list, the target email list appeared blank rather than listing the notification recipients.
If no default document type was assigned to the server, the attachment upload GUI did not filter the other settings on the page, allowing users to assign unsupported combinations of settings and causing some uploads to fail.
A recent change to display reports in the worklist patient folder was applied too broadly, affecting old style indexing used by the viewer’s patient folder, making external reports unavailable from the viewer's patient folder window.
Changes in the DCMTK toolkit allowed the system to generate UIDs longer than the maximum field size. The algorithm for generating UIDs has been modified so all UIDs are unique and within the permitted length.
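The DICOM standard limits a UID to 64 characters, so any generation scheme must enforce that bound. The sketch below is hypothetical: the org root shown is a commonly used example root and the timestamp-plus-random suffix is an illustrative scheme, not eRAD’s actual algorithm.

```python
import time
import random

MAX_UID_LEN = 64  # DICOM UI value representation limit
ORG_ROOT = "1.2.826.0.1.3680043.2"  # example org root, not eRAD's

def generate_uid():
    """Build a UID from the org root plus a millisecond timestamp and a
    random component, then verify it fits within the DICOM limit."""
    suffix = f"{int(time.time() * 1000)}.{random.randrange(10**6)}"
    uid = f"{ORG_ROOT}.{suffix}"
    # Enforce the limit rather than emitting an oversized UID.
    assert len(uid) <= MAX_UID_LEN, "UID exceeds 64 characters"
    return uid

uid = generate_uid()
print(len(uid) <= MAX_UID_LEN)  # True
```

Whatever the suffix scheme, the generator must check the combined length of root and suffix, since a longer org root leaves fewer characters for entropy.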
The repository handler uses database locking, but when it cannot connect to the database, the result is an uncaught exception that breaks the locking mechanism.
When reprocessing a study containing no processed repository, the jit processing routine erroneously created legacy thumbnail images.
The system default calculated fields for the Corrected state (p0000) and Report Exist state (p0002) failed to appear in the configuration page, could not be modified and were unavailable as a worklist column. The data field types changed but the new types were not handled by the database.
While an email address is optional in some notification email configurations, it is required when the list owner is the system account. In these cases, users are now prevented from activating the action until an email address is provided.
A client side exception occurred when the user logged out immediately after logging in, before the worklist could display.
When a worklist refresh occurred (manual or automatic) while a report was being edited in the patient folder and a study disappeared from the worklist, reordering the worklist rows refreshed the report edit page as well, clearing any report data that had been entered.
The user account lock status and the login details reported incorrect information when the user selected the account by checking the selection check box at the beginning of the row. The information was also inconsistent when multiple accounts were selected.
When loading a dashlet, an exception could occur after login due to failure to check for an initialized variable.
When a user-initiated task-related action, such as changing the priority of a scheduled task, incurred an error, the return code was mischaracterized, leaving a lock in place. As a result, new tasks would not run.
Autocorrecting studies to orders using patient name as matching criteria and a patient name containing an apostrophe resulted in a search exception and a failure to autocorrect.
The Partially Inaccessible indicator on the worklist could report an incorrect state if the user clicked the More button to display additional studies while the system was still acquiring them. The call to set the state failed because the page did not handle the request.
The startup script returned the global result variable rather than the local result variable after starting each service on each server in a server farm, resulting in a success status code even when one or more servers failed to start.
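The fix amounts to folding each server’s local result into the overall status instead of returning a variable last set elsewhere. Below is a minimal Python analog of the shell logic; the server names and the start callback are hypothetical.

```python
def start_all(servers, start):
    """Start the service on each server and return a non-zero status if
    any server failed, zero only if all succeeded."""
    overall = 0
    for server in servers:
        rc = start(server)   # local result for this server only
        if rc != 0:
            overall = rc     # fold the failure into the overall status
    return overall

# One failing server must make the farm-wide status non-zero.
print(start_all(["app1", "reg1"], lambda s: 1 if s == "reg1" else 0))  # 1
```

Returning a variable that any later step can overwrite is the shell equivalent of the original bug; accumulating per-iteration results avoids it.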
Global restrictions were unintentionally blocking non-study related data from the Logs page.
When upgrading from v7.2 to v9, worklist filter lists whose names are too long to fit in the database are skipped, with a warning displayed on the console window. A problem in this handling caused the subsequent list to be skipped as well, without any warning.
The hyperlink in a notification email that launches the images in the web viewer was missing the login prompt. If a browser did not have a valid session cookie already, the web viewer failed to load images.
REVERSIBILITY NOTICE: This change requires regenerating all processed and cached data to a format which is incompatible with previous software versions.
DEPENDENCY NOTICE: This change requires viewer 9.0.4.4 or later.
Under very specific and unlikely conditions, the compression algorithm could encounter a matrix boundary condition that caused the compression effort to fail, resulting in no processed image.
The algorithm for resizing images to fit them in the available web viewer frame might terminate prematurely for images whose size is not a power of two. As a result, the image was improperly resized and blurry.
The technologist view page failed to disable forwarding, editing and deleting partial studies residing on multiple mount points.
Removing the processed data repository failed to change the processing status to frozen because the state change was not applied during the callback.
The user name filter drop down menu used the user name and user ID interchangeably, both in display and database query, leading to confusing results. The field permits users to enter user names, and both the user name and user ID are displayed, but when the command is invoked, the value applied is the user ID.
When some non-visible characters, such as the left and right arrows, were entered into a user name field drop down panel, such as when filtering on the user name, the web page triggered an unnecessary call to search the database.
Some default items in the user name drop down menus are supposed to remain, even when the type-ahead string is applied, but they were filtered out.
When clicking out of a text entry field when configuring the default user in a web services device edit page, the style sheet was cleared and when clicking back into the text field, the user’s custom color setting was not applied.
On the manual forward setup page, the user name field (when forwarding to a folder) could become obstructed by the popup menu.
The report selection tools in the report view in the worklist’s patient folder were functioning incorrectly: the color scheme was hardcoded to dark theme; the report component icons failed to select the corresponding report component; and the delete button was highlighted instead of the report component icon. In addition to resolving these issues, a new button, Open All Reports, was added to load all the report components into a single view.
REVERSIBILITY NOTICE: To downgrade, the plugin license(s) must be regenerated.
The mammography, volume 3D and fusion plugins’ short names changed from the ones used in v7 so after upgrading, the plugin license was not recognized.
The tag list available when configuring calculated fields was unsorted, making it difficult to locate a specific tag.
A change to the time’s short format handler did not handle requests for negated search criteria.
Images having an aspect ratio other than 1:1 caused the technologist view page’s carousel to show partial images and the scrolling tools to fail. They also rendered the page’s thumbnail image size options useless.
Attempting to open a study while it was still being acquired across multiple registration servers resulted in a race condition, causing the herpa data in the blob to reference more images than have been processed.
Attempting to collect the information in a PbR object failed from the app server because herelod only runs on the registration server. This affects some web services commands and other features such as editing a study from the worklist. A new intracom service was introduced to get PbR object content from a registration server.
A fix to the user manager added an unnecessary call to prompt for a login when loading the technologist view page or the web viewer page.
Uploading attachments to studies or orders completed without error but the attachment was not saved. This was due to a corrupted environment variable extended by the MCS component’s control script.
When collecting study data failed, the result did not contain a proper error, resulting in an exception.
While linked repositories are not recommended – mount points should be linked, not repositories – the configuration is permitted. When present, the system did not always attempt to resolve the link, resulting in failures when checking the study state.
GUI-initiated requests to reprocess or reheat a study were always performed by a single registration server. Now the system allocates these tasks in a round-robin fashion to distribute the load.
The local cache repository and its default configuration files are created during startup by the cases ctrl script, but the cases ctrl script isn’t invoked on the registration or stream servers. This function has been moved to the dcviewer ctrl script.
When toggling between the Security Settings page and other server configuration pages, the security page contents may refresh and overwrite the other page’s data because an asynchronous call might have taken too long to complete.
When adding cw3 support to the web viewer and technologist view pages, some new javascript pages were not included when running in debug mode.
The indicator on the user accounts page showing a user is logged in failed because the timestamp field type was changed in the database but the check wasn’t updated accordingly.
A retired function called when manipulating a report, such as unfinaling a report or removing an addendum, resulted in an exception. The retired function has been replaced with one supported by v9.
When a DICOM AE requests a study using a DICOM Retrieve request, the forward tasks could fail to apply the soft edit changes causing the data to be sent without the latest updates.
When using the local MCS service from a worklist server to create DICOM media containing studies from two or more different hub servers, the temporary directory names created on the hub servers did not always match the directory names on the worklist server. If the names were not unique, the conflict resulted in missing files. Additionally, the MCS started constructing the DICOMDIR file after the transfer from the first hub server completed, without waiting for transfers from all hub servers to complete.
A change to handling Boolean fields in the database was not extended to the user account lock state field, causing attempts to unlock a locked user account to fail.
While users are not supposed to open order or zero-image studies, requests to do so can occur and are handled. But the stream server failed to process these studies, resulting in a hang when attempting to open the viewer.
When the top item in the future queue was a prepstudy task for an active dcregupdate task, the task was postponed but the system failed to remove it from the top of the queue. Since the task manager only looked at the top item in the queue, task processing became deadlocked.
When the stream connection encounters an exception, such as an unexpected SSL exception, the viewer attempts to reestablish the connection by issuing a fast-connection token, but the server returns an invalid response, hanging the viewer as it waits indefinitely for the appropriate response.
The streaminfo log file was not rotated and continued to grow. The file has been added to the forever log rotation schedule.
Concurrent writes to the stream channel caused by the inclusion of streaming metric data in the data stream resulted in data corruption on the channel. This has been mitigated by submitting synchronous responses to incoming commands on a dedicated outbound queue. Additionally, a mechanism is in place to limit the data packet size. This control setting, if needed, would be assigned by the viewer.
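Serializing all channel writes through a dedicated outbound queue is a standard way to prevent interleaved packets. A minimal Python sketch of the pattern follows; a plain list stands in for the real stream socket, and the packet names are hypothetical.

```python
import queue
import threading

channel = []               # stands in for the real stream socket
outbound = queue.Queue()   # the single, dedicated outbound queue

def writer():
    """The only thread permitted to touch the channel. Because every
    producer enqueues complete packets and one consumer drains them,
    packets can never interleave mid-write."""
    while True:
        packet = outbound.get()
        if packet is None:     # shutdown sentinel
            break
        channel.append(packet)

t = threading.Thread(target=writer)
t.start()
for i in range(5):
    outbound.put(f"packet-{i}")  # responses and metric data both go here
outbound.put(None)
t.join()
print(channel)
```

With every producer funneled through the queue, ordering is preserved and no lock is needed around the channel itself.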
When a hub server is backed up, the command to purge an order across the dotcom after correcting it to an image failed to propagate. As a result, the study could not be opened from the RIS because the search for the study returned multiple items (the study and the lingering orders).
Worklist filters for name fields, study size and multi-value fields have been updated to support features available in earlier versions, including the ability to search on individual name components.
The length of enumerated values assigned to a database field was not checked, resulting in unexpected values and results. The length is now defined on the setup page and value lengths are enforced before saving.
Uncaught exceptions coming from Internet Explorer were not handled properly, resulting in a web page exception.
The method used to open the help pages in a new window blocked pop-ups by default. The setting has been changed to allow the new tab to open without user acknowledgement.
In a dotcom where the master is the child server, a report edit could get processed on the child and propagated to the parent before the parent registered the original report. If a report notification event arrived at the parent before the report was registered, the event notification failed to trigger before all the fields were updated.
Given the special handling of static user accounts, such as the system account, the user account export script failed to export any information. The script now ignores static accounts.
The Study Changed field did not recognize saving report objects. As a result, the study fingerprint wasn’t updated and the change state remained untouched.
When creating or modifying a DICOM device entry with a duplicate AE Title, if the user chose to ignore the warning and save it anyway, the software failed to apply the change because the override flag was ignored.
When a study update and study acquisition event occur within the same period, the study acquisition notification message could be suppressed due to the message reduction process. Now, study acquisition events are no longer collapsed with study update events.
The mechanism used to reconnect the persistent database connection was not implemented, resulting in database access errors.
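A persistent connection generally needs a reconnect-on-failure wrapper around every request. A minimal sketch, assuming a connection factory and a generic ConnectionError (all names hypothetical):

```python
class ReconnectingConnection:
    """Wrap a connection factory; reopen the connection when a call fails."""

    def __init__(self, connect, max_retries=3):
        self._connect = connect            # factory returning a live connection
        self._max_retries = max_retries
        self._conn = self._connect()

    def execute(self, request):
        last_error = None
        for _ in range(self._max_retries):
            try:
                return self._conn.execute(request)
            except ConnectionError as exc:
                last_error = exc
                self._conn = self._connect()   # reconnect, then retry
        raise last_error
```

Without the reconnect step, the first dropped connection poisons every later request, which matches the database access errors described above.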
Studies with compressed images greater than 1MB displayed corrupted (noisy) images because processing failed to buffer pages correctly.
The task manager could stop sending order notification messages to web service devices if a device was inaccessible and the message task queued for retry was then deleted or suspended. When the device became accessible again, future messages were collapsed behind the deleted retry task.
After correcting the importation of custom worklist layouts when upgrading from v7.2 to v8, the action buttons and lock indicator were dropped because v7.2 does not store them in the worklist configuration. By default, upgrades include the default v8 worklist action buttons.
If a saved worklist contains conditional coloring on a hidden column, an error occurs because the coloring tool cannot locate the column and the worklist appears as an empty list.
A user account’s password settings could be applied after making temporary changes to the account’s LDAP settings, even when the account was configured to use an LDAP authentication agent.
Pressing the More bar to display the next page of worklist entries could result in duplicate rows if the user has no default worklist defined and is in a group with an unsorted, unfiltered default worklist defined.
Processing a late-arriving object resulted in reprocessing existing objects’ initial quality blob (thumbnail) data because the herpa creator did not yet check for existing data.
The web viewer failed to launch on a Hyper+ farm system in which the stream server runs on a different server than the application/web server because the web viewer was passed only the web service ports and not the full server URL.
Processing large objects into blobs could result in corrupt data due to a missing lock.
The java component upgrade applied in Hyper+, replacing the old unix socket implementation, does not support the same socket options. When attempting to forward studies under certain conditions, an unsupported option caused an exception and the request failed.
Report view templates using a field with the VR of SI, such as the Interpretation Status ID field, would log errors and display the raw data because support for the VR type was removed.
Copying a study to the worklist folder failed because the data directory was not created, a result of moving the storestate.rec file from DICOM repository to the meta repository.
UPGRADE NOTICE: This change invalidates all blob (processed) data in the cache. A data value overflow condition existed in the blob header when the blob size exceeded 2GBs, causing blob creation (processing) to miss some images.
The taskd client canceled the keepalive timer when terminating the connection, which prevented the timer from being restarted and caused reprocessing and reindexing requests to fail.
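A keepalive timer must remain re-armable after a connection teardown: canceling the current timer should not prevent a later start. A sketch using threading.Timer (the Keepalive class is hypothetical, not taskd's code):

```python
import threading

class Keepalive:
    """Re-armable keepalive: cancel() stops the current timer, but
    start() can always arm a fresh one afterwards."""

    def __init__(self, interval, ping):
        self._interval = interval
        self._ping = ping
        self._timer = None

    def start(self):
        self.cancel()  # a fired or canceled Timer cannot be reused
        self._timer = threading.Timer(self._interval, self._fire)
        self._timer.daemon = True
        self._timer.start()

    def cancel(self):
        if self._timer is not None:
            self._timer.cancel()
            self._timer = None

    def _fire(self):
        self._ping()
        self.start()   # re-arm for the next interval
```

Because each start() creates a new Timer object, a connection can be torn down and re-established any number of times without losing the keepalive.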
When requesting to clear the cache from the Tech View page very quickly after loading the web page, a maintenance procedure might not complete before the cache clearing started, resulting in an error and the cache data remaining in the repository.
When registering the PbR before the image objects, the value of the Date field could display the PbR’s creation date-time rather than the image object’s study date-time because the calculation of the Date field from the minimum SOP instance was not performed.
A low level lock timer created a condition that limited the number of times the system could attempt to release a reference counter, yet during certain real world scenarios, more attempts are needed. As a result, reference counts were not released, causing an inconsistent state in the data.
Some documented MySQL exceptions occurred but the recommended solution – retry the query/update – was not applied.
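The recommended handling for these documented exceptions (for example, lock wait timeout 1205 and deadlock 1213) is a bounded retry of the query or update. A sketch with a stand-in error class, since the real driver's exception type is not shown here:

```python
TRANSIENT_MYSQL_ERRORS = {1205, 1213}   # lock wait timeout, deadlock

class MySQLError(Exception):
    """Stand-in for the driver's error type (hypothetical)."""
    def __init__(self, errno):
        super().__init__(f"mysql error {errno}")
        self.errno = errno

def run_with_retry(query_fn, attempts=3):
    """Retry the query/update, but only on documented transient errors."""
    for attempt in range(attempts):
        try:
            return query_fn()
        except MySQLError as exc:
            # Non-transient errors, or running out of attempts, propagate.
            if exc.errno not in TRANSIENT_MYSQL_ERRORS or attempt == attempts - 1:
                raise
```

Limiting the retry to a known set of error codes keeps genuine failures visible instead of silently looping on them.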
Multiple collapsed prepStudy tasks could exist in the task queue at the same time due to a race condition when creating these tasks.
prepStudy tasks in the retry queue could not be terminated by the post-collapse cleanup function.
After upgrading Chromium (used by Chrome, Edge and other browsers) to Version 106.0.5249.103 or later, some of the browser’s drag and drop features, such as applying a worklist filter from a column header or column value, corrupted the web page contents, resulting in a disorganized layout.
A change in the DCMTK toolkit required connection timeouts to be assigned earlier than they were. As a result, all but the first send request and all the receive requests used the built-in timeout value.
Some operations could be copied from the browser’s network panel and invoked from another browser by a user with different permissions, allowing users to perform unpermitted operations. The missing permission check has been applied.
If the Institution Name value contains an apostrophe and the field is used in the filter criteria applied on the worklist, the open next/previous study command results in an exception due to an improperly formed query.
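Values containing an apostrophe break SQL that is assembled by string concatenation; binding the value as a parameter avoids the malformed query entirely. A sketch using sqlite3 purely to illustrate the technique (not the server's actual database layer):

```python
import sqlite3

# In-memory table standing in for the worklist query layer.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE study (id INTEGER, institution TEXT)")
conn.execute("INSERT INTO study VALUES (?, ?)", (1, "St. Mary's"))

# Concatenating the value would yield
#   ... WHERE institution = 'St. Mary's'   <- unbalanced quote, malformed query.
# Binding it instead leaves all quoting to the driver:
rows = conn.execute(
    "SELECT id FROM study WHERE institution = ?",
    ("St. Mary's",),
).fetchall()
```

The same placeholder approach applies to the open next/previous study query regardless of the underlying database.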
The system ignored global restrictions when selecting the next/previous study. If the user did not have permission to view the selected study, the user received an invalid study error.
A typo existed in the term “ForwardStudy” in log file entries for a web services study forward event.
Newly created document types failed to show up on the document type configuration page until after a browser page refresh.
The viewer version number in the uploaded viewer logs was incorrect because the data was taken from the wrong object.
When a registered viewer device issues a cache state request to the server and the server does not have an explicitly defined prior study cache state setting, the parsing algorithm misinterprets the parsing results and registers an unnecessary exception in the log file.
The initial dotcom setup process failed to include the server’s self ID setting in the default configuration file. As a result, the support account was not recognized.
A reorganization of the component start up scripts broke the setup of the default pb-scp configuration file, resulting in appending the wrong defaults to the end of the configured settings which were then taken as the configured value.
Actions failed to run because the path used to identify the curl script used the removed custom component. The path has been updated to use the OS-supplied curl tools.
The persistent database connections would not reconnect if the connection was lost or timed out, resulting in retried attempts to register objects, among other incomplete database requests.
When attempting to acquire and register large numbers of objects in a short period of time, herelod processes failed to terminate cleanly, waiting unnecessarily on the release of conditional variables, resulting in failed registration tasks and dropped objects.
After upgrading java, an incompatible JAX-WS file caused all web services commands to fail. Upgraded JAX-WS to version 2.3.5.
The location of java has moved, but the path variable used in multiple scripts, including the user account import and export tools, still pointed to the former location.
Persistent and non-persistent database connections would release the SQL library object when terminating, invalidating the persistent connection and causing unstable behavior in other threads.
The updated version of MySQL, using the carry-over settings, treats truncation as an error as opposed to truncating the value automatically. The settings have been updated to default to the previous truncation behavior.
A study with no cache or processed data directories, e.g., a study with just a PbR object, encountered an exception when preparing the meta data because of a failure to examine the return code value.
If the user manager runs on an independent server, login attempts failed because the software did not pass data between discrete objects correctly.
Clearing cache from the Technologist view page deleted the data but failed to update the internal processing state value because the feature to track processing status across multiple servers was not yet implemented.
Reindexing a study failed when the ingestion and application components run on separate servers because the application server has no registration abilities. The registration request is now submitted to one of the registration servers.
The stream server the viewer uses to download the data is defined in the herpa data but the early v9 viewer does not use the value yet. Until this is available, the server will leave the setting empty if it determines the stream service and the web service are running on the same server, forcing the viewer to fall back on its assumption they are the same. Note this solution only works when running stream and web services on the same server. When the services are separated, an updated viewer is required.
Static user passwords failed to account for the updated hash format.
Updating the password hash format missed a few places, including the Change Password page.
Improper handling of a return code resulted in clearing the action history file when a database query encountered an anomaly or simply failed to complete.
Parsing the date-time values in patient folder notes assumed a 12-hour clock rather than a 24-hour clock.
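In strptime terms, the fix is parsing with the 24-hour hour directive (%H, 00-23) rather than the 12-hour one (%I, 01-12). A minimal illustration (the timestamp format shown is an assumption, not the notes' actual format):

```python
from datetime import datetime

# A note timestamp written on a 24-hour clock.
note_ts = "2023-04-07 14:35"

# %I would reject hour 14 outright (and, without %p, would silently
# read an afternoon time as morning); %H parses it as written.
parsed = datetime.strptime(note_ts, "%Y-%m-%d %H:%M")
```

The same distinction exists in most date-time libraries, so the fix is choosing the correct format directive rather than post-correcting the parsed hour.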
Correcting a study to an order might fail because orders don’t have an owner hub, which is required to determine where the combined study resides. As a result, manual corrections from the GUI reported a failure to the user.
A missing RMI call caused the server correcting a late-arriving order to a study to not update the study data with the order data if the correcting server was not the study owner.
If the shutdown process encountered an exception, which could be legitimate depending on the timing/sequencing, it could exit before terminating hermes.
Back-end support for forwarding studies from the patient folder was incomplete, resulting in no action when the button was clicked.
The query time reported in a slow query log entry was in seconds but tagged as being in milliseconds.
User accounts with empty passwords, which is not a valid state, could not be corrected because the missing password was not handled and caused an error when editing.
Duplicate tasks that weren’t collapsed were not returned to the duplicate task map, causing them to remain in the retry queue until executed. Under certain conditions, the retry queue could grow large with unnecessary duplicate tasks.
While looking for plugin license files, the system failed to recognize plugins that were not distributed as DLL files.
Identifying the email address offered as the default when configuring a Notify action failed for internally-defined accounts, such as the system account. In this case, the default comes up empty, requiring the user to explicitly declare the email recipient.
If the tasks page’s filter panel was open when the user called up a different web page, the filter panel was not closed and remained on the screen.
Viewer sessions were not recognized after restarting Apache, causing the cached data fields (eg, Percent Loaded) to report no data until the user logged out and back in.
Searching a worklist using a date value in the quick filter field that would result in a query qualifier exception returned an error message and no data because the date filter was improperly encoded in the database search request.
The compress data action used a static pathname to the study rather than using the repository’s location finder. If the study was moved from its original location, the compress action could not locate it and therefore failed to process the data.
A system lock failed to be released because of a missing constructor. The constructor has been added to avoid the stuck lock. Also, when a user attempted to break the system lock, which is not permitted, the user received no explanation of why the lock remained. Now the user is informed they do not have permission to break a system lock.
When using a custom port to launch the web viewer, the server would mishandle parsing the URL to locate the host name, causing the request to fail.
The plugin licensing enhancements failed to recognize custom plugin modules because the new naming rules were not applied correctly to custom plugin file names.
When using the web services command to create a user and including the password option tag but specifying no options, the request would fail because the server could not parse the empty string from the request.
The delete button was not available from the patient folder if the study contained more than one report object (i.e., there was an addendum to the main report).