
Abstract

Despite advances in the computer-based chemical process modeling and simulation packages used commercially to accelerate chemical process design and analysis, certain design-optimization tasks, such as distillation column internals design, remain bottlenecks due to inherent limitations of these software packages. This work demonstrates the use of soft actor-critic (SAC) reinforcement learning (RL) to automate the task of determining the optimal design of trayed multistage distillation columns. The design environment was created using the AspenPlus® software (version 12, Aspen Technology Inc., Bedford, Massachusetts, USA) with its RadFrac module for the required rigorous modeling of the column internals. The RL computational work was accomplished by developing a Python package for interfacing with AspenPlus® and by implementing the learning space for the state and action variables in the Gymnasium module (version 1.0.0, maintained by the Farama Foundation as the successor to OpenAI Gym). The results show that (1) SAC RL works as an automation approach for the design of distillation column internals, (2) the reward scheme in the SAC model significantly affects SAC performance, (3) column diameter is a significant constraint on meeting the flooding specifications of the column internals design, and (4) SAC hyperparameters have varying effects on SAC performance. SAC RL can be implemented as a one-shot learning model that significantly improves the design of multistage distillation column internals by automating the optimization process.
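The abstract describes, but does not show, the Gymnasium learning space. As a rough, hypothetical sketch only (the class name, the two design variables, the _simulate stub, and the flooding-target reward are illustrative stand-ins for the paper's actual AspenPlus® RadFrac interface and reward scheme), a custom Gymnasium environment for this kind of column-internals design loop might look like:

import numpy as np
import gymnasium as gym
from gymnasium import spaces

class ColumnInternalsEnv(gym.Env):
    """Hypothetical environment for trayed-column internals design.

    The paper drives AspenPlus® (RadFrac) from Python; here the simulator
    call is replaced by a crude stub so the sketch is self-contained.
    """

    def __init__(self):
        super().__init__()
        # Actions: bounded adjustments to two illustrative design variables,
        # column diameter [m] and tray spacing [m].
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)
        # Observations: the current design plus the resulting flooding fraction.
        self.observation_space = spaces.Box(low=0.0, high=np.inf, shape=(3,), dtype=np.float32)
        self._design = np.array([1.5, 0.6], dtype=np.float32)

    def _simulate(self, design):
        # Stand-in for a rigorous RadFrac hydraulics run in AspenPlus®:
        # larger diameter and wider tray spacing reduce flooding.
        diameter, spacing = design
        return float(1.2 / (diameter * np.sqrt(spacing)))

    def _obs(self):
        return np.array([*self._design, self._simulate(self._design)], dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self._design = np.array([1.5, 0.6], dtype=np.float32)
        return self._obs(), {}

    def step(self, action):
        # Apply a small, clipped design move, then re-evaluate the column.
        self._design = np.clip(self._design + 0.05 * action, 0.3, 5.0).astype(np.float32)
        flooding = self._simulate(self._design)
        # Illustrative reward: penalize deviation from a target flooding
        # fraction of 0.8 (the paper finds the reward scheme matters greatly).
        reward = -abs(flooding - 0.8)
        terminated = bool(abs(flooding - 0.8) < 0.01)
        return self._obs(), reward, terminated, False, {}

Under these assumptions, such an environment plugs directly into an off-the-shelf SAC trainer, e.g. Stable-Baselines3: model = SAC("MlpPolicy", ColumnInternalsEnv()), then model.learn(total_timesteps=10_000).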

Details

Title
Soft Actor-Critic Reinforcement Learning Improves Distillation Column Internals Design Optimization
Author
Fortela, Dhan Lord B. 1; Broussard, Holden 2; Ward, Renee 2; Broussard, Carly 2; Mikolajczyk, Ashley P. 1; Bayoumi, Magdy A. 3; Zappi, Mark E. 1

1 Department of Chemical Engineering, University of Louisiana at Lafayette, Lafayette, LA 70504, USA; [email protected] (H.B.); [email protected] (R.W.); [email protected] (C.B.); [email protected] (A.P.M.); [email protected] (M.E.Z.); The Energy Institute of Louisiana, University of Louisiana at Lafayette, Lafayette, LA 70504, USA
2 Department of Chemical Engineering, University of Louisiana at Lafayette, Lafayette, LA 70504, USA; [email protected] (H.B.); [email protected] (R.W.); [email protected] (C.B.); [email protected] (A.P.M.); [email protected] (M.E.Z.)
3 Department of Electrical and Computer Engineering, University of Louisiana at Lafayette, Lafayette, LA 70504, USA; [email protected]
Publication title
ChemEngineering
Volume
9
Issue
2
First page
34
Publication year
2025
Publication date
2025
Publisher
MDPI AG
Place of publication
Basel
Country of publication
Switzerland
e-ISSN
2305-7084
Source type
Scholarly Journal
Language of publication
English
Document type
Journal Article
Publication history
Online publication date
2025-03-18
Milestone dates
2025-01-22 (Received); 2025-03-12 (Accepted)
First posting date
2025-03-18
ProQuest document ID
3194505763
Document URL
https://www.proquest.com/scholarly-journals/soft-actor-critic-reinforcement-learning-improves/docview/3194505763/se-2?accountid=208611
Copyright
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Last updated
2025-04-25
Database
ProQuest One Academic