Browse by Author "Ata, Baris"
Now showing 1 - 2 of 2
Item
An improved roosters algorithm for constrained 3D UAV path planning in urban environments
(Nature Portfolio, 2025) Gencal, Mashar Cenk; Ata, Baris; Kurucan, Mehmet; Kilinc, Emre
Urban environments impose complex challenges for the navigation of unmanned aerial vehicles (UAVs), including dense obstacles, no-fly zones, energy constraints, and regulatory restrictions. Addressing these challenges requires efficient and robust optimization techniques. This study introduces the Improved Roosters Algorithm (IRA), a novel metaheuristic inspired by the natural dominance behavior of roosters, tailored for constrained 3D UAV path planning in urban scenarios. Unlike existing metaheuristics, IRA introduces a spiral dancing operator, adaptive constraint handling, and a hierarchical population structure. These innovations directly target the lack of adaptive mechanisms in constraint-rich urban environments, enabling more reliable and realistic UAV path planning. The performance of IRA is benchmarked against Particle Swarm Optimization (PSO), the Standard Genetic Algorithm (SGA), Differential Evolution (DE), the Grey Wolf Optimizer (GWO), and the original Roosters Algorithm (RA) across three increasingly complex simulation scenarios. Experimental results demonstrate that IRA consistently outperforms the baseline methods in terms of feasibility and optimality, validating its potential as a competitive tool for UAV mission planning in realistic urban environments.

Item
Benchmarking Large Language Model Reasoning in Indoor Robot Navigation
(IEEE, 2025) Balci, Emirhan; Sarigul, Mehmet; Ata, Baris
This study evaluates the performance of state-of-the-art text-based generative large language models in indoor robot navigation planning, focusing on object, spatial, and common-sense reasoning-centric instructions. Three scenes from the Matterport3D dataset were selected, along with corresponding instruction sequences and routes.
Object-labeled semantic maps were generated using the RGB-D images and camera poses of the scenes. The instructions were provided to the models, and the generated robot codes were executed on a mobile robot within the selected scenes. The routes followed by the robot, which detected objects through the semantic map, were recorded. The findings indicate that while the models successfully executed object and spatial-based instructions, some models struggled with those requiring common-sense reasoning. This study aims to contribute to robotics research by providing insights into the navigation planning capabilities of language models.
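The second abstract describes instructions being reduced, via generated robot code, to goal lookups against an object-labeled semantic map. The map format, object names, and function names below are hypothetical illustrations only (the study's actual maps are built from RGB-D images and camera poses of Matterport3D scenes); this sketch merely shows the basic lookup step such a pipeline relies on.

```python
from math import hypot

# Hypothetical minimal semantic map: object label -> (x, y) position.
# Illustration only; the real maps in the study are far richer.
semantic_map = {
    "chair": (2.0, 1.5),
    "table": (4.0, 3.0),
    "plant": (6.5, 1.0),
}

def goal_for(label, smap):
    """Look up the position of the object named in an instruction."""
    if label not in smap:
        raise KeyError(f"object '{label}' not found in semantic map")
    return smap[label]

def distance(a, b):
    """Euclidean distance between two 2D points."""
    return hypot(a[0] - b[0], a[1] - b[1])

# Example: an instruction like "go to the table" reduces to a map lookup.
robot = (0.0, 0.0)
goal = goal_for("table", semantic_map)
print(distance(robot, goal))  # 5.0
```

In the study itself this lookup would be one small piece of the LLM-generated robot code executed in the simulated scenes.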









