Nicole Nellen, M.Sc.
Address
Technische Universität Hamburg
Institut für Maritime Logistik
Am Schwarzenberg-Campus 4 (D)
21073 Hamburg
Contact
Office: Building D, Room 5.005
Phone: +49 40 42878 6136
E-Mail: nicole.nellen(at)tuhh(dot)de
ORCiD: 0000-0002-3911-1811
Research Focus
- Intra-port and hinterland transport
- Inland and seaport container terminals
- Discrete-event simulation
- Simulation-based process optimization of intermodal (KV) terminals
Publications (Selection)
2023
Grafelmann, M., Nellen, N. and Jahn, C. (2023). Reinforcement Learning at Container Terminals: A Literature Classification. In: Clausen, U. and Dellbrügge, M. (Eds.), <em>Advances in Resilient and Sustainable Transport. ICPLT 2023. Lecture Notes in Logistics</em>. Springer, Cham, pp. 147–159. DOI: 10.1007/978-3-031-28236-2_10
Abstract: Seaport container terminals play a crucial role in global supply chains. They must be capable of handling ever-larger ships in less time and at competitive prices. In response, terminals are seeking new approaches to optimizing operational decisions in order to improve container throughput and operational efficiency. Artificial intelligence (AI) methods continue to show great potential for solving diverse and complex planning problems in logistics. Reinforcement learning (RL) methods in particular are increasingly being explored, as these machine learning approaches do not strictly follow a pre-specified goal; instead, programmed agents act and self-optimize in a virtual training environment. This paper provides a comprehensive review, classification, and discussion of the relevant literature on RL for operational decision problems at container terminals. It demonstrates the feasibility of RL approaches but also highlights their hurdles and current limitations.