Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/497755
Full metadata record
DC Field | Value | Language
dc.coverage.spatial
dc.date.accessioned: 2023-07-07T11:31:50Z
dc.date.available: 2023-07-07T11:31:50Z
dc.identifier.uri: http://hdl.handle.net/10603/497755
dc.description.abstract: In this thesis we consider stochastic control problems with probability and risk-sensitive criteria, covering both single-controller and multi-controller problems. Under the probability criterion we first consider a zero-sum game with a semi-Markov state process, a general state space, and finite action spaces. Under suitable assumptions we establish the existence of the value of the game and characterize it through an optimality equation; in the process we also prescribe a saddle-point equilibrium. Next we consider a zero-sum game with probability criterion for continuous-time Markov chains, with a denumerable state space and unbounded transition rates. Again under suitable assumptions, we show the existence of the value of the game and characterize it as the unique solution of a pair of Shapley equations. We also establish the existence of a randomized stationary saddle-point equilibrium.

In the risk-sensitive setup we consider a single-controller problem with a semi-Markov state process and a discrete state space. In place of the classical risk-sensitive utility function, which is the exponential function, we consider general utility functions; the optimization criterion also contains a discount factor (an illustrative sketch of the classical exponential criterion is given below, after the metadata record). We investigate random finite-horizon and infinite-horizon problems. Using a state-augmentation technique we characterize the value functions and prescribe optimal controls. We then consider risk-sensitive game problems: zero-sum and non-zero-sum risk-sensitive average-criterion games for semi-Markov processes with a finite state space. For the zero-sum case, under suitable assumptions we show that the game has a value and establish the existence of a stationary saddle-point equilibrium. For the non-zero-sum case, under suitable assumptions we establish the existence of a stationary Nash equilibrium.

Finally, we also consider a partially observable model. More specifically, we investigate partially observable zero-sum games where the state process is a discrete-time Markov chain.
dc.format.extent
dc.language: English
dc.relation
dc.rights: self
dc.title: Stochastic Control Problems with Probability and Risk sensitive Criteria
dc.title.alternative
dc.creator.researcher: Bhabak, Arnab
dc.subject.keyword: Mathematics
dc.subject.keyword: Physical Sciences
dc.description.note
dc.contributor.guide: Saha, Subhamay
dc.publisher.place: Guwahati
dc.publisher.university: Indian Institute of Technology Guwahati
dc.publisher.institution: DEPARTMENT OF MATHEMATICS
dc.date.registered: 2018
dc.date.completed: 2023
dc.date.awarded: 2023
dc.format.dimensions
dc.format.accompanyingmaterial: None
dc.source.university: University
dc.type.degree: Ph.D.
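Note on the risk-sensitive criterion mentioned in the abstract above: the thesis replaces the classical exponential utility with general utility functions and includes a discount factor. As a minimal illustrative sketch only (not the thesis's exact semi-Markov formulation; the symbols J, pi, theta, alpha, c, X_n, and A_n are notation assumed here), the classical discounted exponential risk-sensitive cost is commonly written as:

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Illustrative sketch only: classical exponential (risk-sensitive) discounted cost
% for a discrete-time controlled Markov chain. All notation is assumed here and
% is not taken from the thesis itself.
\[
  J^{\pi}_{\theta}(x)
  \;=\; \frac{1}{\theta}\,
  \log \mathbb{E}^{\pi}_{x}\!\left[
    \exp\!\Bigl(\theta \textstyle\sum_{n=0}^{\infty} \alpha^{n}\, c(X_{n},A_{n})\Bigr)
  \right],
  \qquad \theta > 0,\ \alpha \in (0,1),
\]
% $X_n$: state process, $A_n$: action, $c$: one-stage cost, $\alpha$: discount
% factor, $\theta$: risk-sensitivity parameter, $\pi$: policy.
\end{document}

Per the abstract, the thesis instead works with general utility functions in place of the exponential, for a semi-Markov (rather than discrete-time Markov) state process, over random finite-horizon and infinite-horizon criteria.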
Appears in Departments: DEPARTMENT OF MATHEMATICS

Files in This Item:
File | Description | Size | Format
01_fulltext.pdf | Attached File | 993.11 kB | Adobe PDF
04_abstract.pdf |  | 78.83 kB | Adobe PDF
80_recommendation.pdf |  | 183.66 kB | Adobe PDF


Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).
