Why does the CityLearn tutorial keep throwing me a ValueError?


Post by Anonymous »

I am following this CityLearn tutorial. However, when I implement the second part (Q-Learning, literally copy-pasted from the site), I keep getting a

ValueError: not enough values to unpack (expected 2, got 1).

Code:

# ----------------- CALCULATE NUMBER OF TRAINING EPISODES -----------------
i = 3
m = tql_env.observation_space[0].n
n = tql_env.action_space[0].n
t = tql_env.time_steps - 1
tql_episodes = m*n*i/t
tql_episodes = int(tql_episodes)
print('Q-Table dimension:', (m, n))
print('Number of episodes to train:', tql_episodes)

# ------------------------------- SET LOADER ------------------------------
loader = get_loader(max=tql_episodes*t)
display(loader)

# ----------------------- SET MODEL HYPERPARAMETERS -----------------------
tql_kwargs = {
    'epsilon': 1.0,
    'minimum_epsilon': 0.01,
    'epsilon_decay': 0.0001,
    'learning_rate': 0.005,
    'discount_factor': 0.99,
}

# ----------------------- INITIALIZE AND TRAIN MODEL ----------------------
tql_model = CustomTabularQLearning(
    env=tql_env,
    loader=loader,
    random_seed=RANDOM_SEED,
    **tql_kwargs
)
_ = tql_model.learn(episodes=tql_episodes)
And the output looks like this:
Q-Table dimension: (24, 1728)
Number of episodes to train: 745

Simulating:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_35752\2723239734.py in 
29     **tql_kwargs
30 )
---> 31 _ = tql_model.learn(episodes=tql_episodes)

~\AppData\Local\anaconda3\envs\citylearn1.8\lib\site-packages\citylearn\agents\base.py in learn(self, episodes, keep_env_history, env_history_directory, deterministic, deterministic_finish, logging_level)
136         for episode in range(episodes):
137             deterministic = deterministic or (deterministic_finish and episode >= episodes - 1)
--> 138             observations = self.env.reset()
139
140             while not self.env.done:

~\AppData\Local\anaconda3\envs\citylearn1.8\lib\site-packages\gym\core.py in reset(self, **kwargs)
321     def reset(self, **kwargs) -> Tuple[ObsType, dict]:
322         """Resets the environment with kwargs."""
--> 323         return self.env.reset(**kwargs)
324
325     def render(

~\AppData\Local\anaconda3\envs\citylearn1.8\lib\site-packages\gym\core.py in reset(self, **kwargs)
321     def reset(self, **kwargs) -> Tuple[ObsType, dict]:
322         """Resets the environment with kwargs."""
--> 323         return self.env.reset(**kwargs)
324
325     def render(

~\AppData\Local\anaconda3\envs\citylearn1.8\lib\site-packages\gym\core.py in reset(self, **kwargs)
321     def reset(self, **kwargs) -> Tuple[ObsType, dict]:
322         """Resets the environment with kwargs."""
--> 323         return self.env.reset(**kwargs)
324
325     def render(

~\AppData\Local\anaconda3\envs\citylearn1.8\lib\site-packages\gym\core.py in reset(self, **kwargs)
377     def reset(self, **kwargs):
378         """Resets the environment, returning a modified observation using :meth:`self.observation`."""
--> 379         obs, info = self.env.reset(**kwargs)
380         return self.observation(obs), info
381

~\AppData\Local\anaconda3\envs\citylearn1.8\lib\site-packages\gym\core.py in reset(self, **kwargs)
377     def reset(self, **kwargs):
378         """Resets the environment, returning a modified observation using :meth:`self.observation`."""
--> 379         obs, info = self.env.reset(**kwargs)
380         return self.observation(obs), info
381

ValueError: not enough values to unpack (expected 2, got 1)
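For what it's worth, the unpack failure in the last traceback frame can be reproduced without CityLearn at all. The inner environment's `reset()` seems to return only the observations (here a length-1 list, one entry per building), while the gym wrapper tries to unpack the return value as `(obs, info)` per the newer gym API. A minimal, self-contained sketch of that mismatch (class names are illustrative, not actual CityLearn classes):

```python
# Minimal reproduction of the reset-API mismatch seen in the traceback.
# Assumption: the wrapped env follows the old gym API (reset() -> obs),
# while the wrapper expects the new API (reset() -> (obs, info)).

class OldStyleEnv:
    def reset(self):
        # Old-style reset: returns only the observations, here a
        # length-1 list (one building's observation vector).
        return [[0.0, 0.5, 1.0]]

class NewStyleWrapper:
    def __init__(self, env):
        self.env = env

    def reset(self):
        # New-style wrapper: expects a two-element (obs, info) tuple,
        # so unpacking a length-1 list raises ValueError.
        obs, info = self.env.reset()
        return obs, info

try:
    NewStyleWrapper(OldStyleEnv()).reset()
except ValueError as e:
    print(e)  # not enough values to unpack (expected 2, got 1)
```

This suggests the installed gym version and the CityLearn version in the `citylearn1.8` environment disagree about the `reset()` return signature.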
I really don't understand what could possibly be going wrong.
