International Journal of Advance Computational Engineering and Networking (IJACEN)
Journal Paper


Paper Title: Opportunistic Routing in Cognitive Radio Networks Using Reinforcement Learning

Authors: Jitisha R. Patel, Sunita S. Barve

Article Citation: Jitisha R. Patel, Sunita S. Barve (2014), "Opportunistic Routing in Cognitive Radio Networks Using Reinforcement Learning", International Journal of Advance Computational Engineering and Networking (IJACEN), pp. 1-3, Volume-2, Issue-8.

Abstract: Cognitive radio (CR) technology is developing rapidly owing to its capability for adaptive learning and reconfiguration. Cognitive Radio Networks (CRNs) can therefore increase spectrum efficiency by allowing secondary users (SUs) to access the licensed band dynamically and opportunistically without interfering with the primary users (PUs). Daniel H. and Ryan W. Thomas define CRNs, in the context of machine learning, as networks that improve their performance through experience gained over time, without complete information about the environment in which they operate. This dynamism and opportunism can be learnt through reinforcement learning (RL), which is concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. This paper proposes a routing scheme that uses Q-learning, the most widely used RL approach in wireless networks. In Q-learning, the learnt action value, or Q-value, Q(state, event, action), is updated using the reward and recorded. For each state-event pair, an appropriate action is rewarded and its Q-value is increased; the Q-value thus indicates the appropriateness of selecting a given action for a given state-event pair. At any time instant, the agent chooses the action that maximizes the Q-value. The reward corresponds to a performance metric such as throughput.
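The following is a minimal sketch of the tabular Q-learning scheme the abstract describes, with Q-values indexed by (state, event, action). The state/event/action encodings, the learning rate, discount factor, exploration rate, and the throughput-based reward are illustrative assumptions for this sketch, not details taken from the paper.

```python
import random
from collections import defaultdict

class QRoutingAgent:
    """Tabular Q-learning agent for opportunistic routing (illustrative sketch).

    Q-values are indexed by (state, event, action) as in the abstract, e.g.
    state = current node, event = observed spectrum/channel condition,
    action = choice of next hop or channel. All hyperparameters are assumed.
    """

    def __init__(self, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # Q(state, event, action), defaults to 0.0
        self.alpha = alpha           # learning rate (assumed value)
        self.gamma = gamma           # discount factor (assumed value)
        self.epsilon = epsilon       # exploration probability (assumed value)

    def choose_action(self, state, event, actions):
        """Pick the action that maximizes Q for this state-event pair,
        with epsilon-greedy exploration."""
        if random.random() < self.epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: self.q[(state, event, a)])

    def update(self, state, event, action, reward,
               next_state, next_event, next_actions):
        """Standard Q-learning update: the reward (e.g. achieved throughput)
        reinforces the chosen action for the state-event pair."""
        best_next = max(
            (self.q[(next_state, next_event, a)] for a in next_actions),
            default=0.0,
        )
        key = (state, event, action)
        self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])
```

In use, the agent would call choose_action at each forwarding decision and update once feedback (such as measured throughput for the chosen hop/channel) is available; the epsilon-greedy step is one common way to balance exploring new routes against exploiting learnt Q-values.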

Type: Research paper

Published: Volume-2, Issue-8


DOI Online No.: IJACEN-IRAJ-DOIONLINE-1065

Copyright: © Institute of Research and Journals

Published on: 2014-08-01