Some weights of the model checkpoint at entelecheia/ekonelectra-base-discriminator were not used when initializing ElectraForSequenceClassification: ['discriminator_predictions.dense_prediction.weight', 'discriminator_predictions.dense_prediction.bias', 'discriminator_predictions.dense.bias', 'discriminator_predictions.dense.weight']
- This IS expected if you are initializing ElectraForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing ElectraForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of ElectraForSequenceClassification were not initialized from the model checkpoint at entelecheia/ekonelectra-base-discriminator and are newly initialized: ['classifier.out_proj.bias', 'classifier.out_proj.weight', 'classifier.dense.weight', 'classifier.dense.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
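These messages are the expected result of repurposing the ELECTRA discriminator checkpoint for sequence classification: the discriminator prediction head is discarded and a new classification head is randomly initialized, so the model has to be fine-tuned before it can make useful predictions. A minimal sketch of the load that produces them (the `num_labels` value is a hypothetical placeholder for the real number of ESG topic labels):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "entelecheia/ekonelectra-base-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Loading a discriminator checkpoint into a classification architecture drops the
# `discriminator_predictions.*` weights and freshly initializes the `classifier.*`
# head, which is exactly what the warnings above report.
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=3,  # hypothetical; set to the actual number of ESG topic labels
)
```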
Token indices sequence length is longer than the specified maximum sequence length for this model (533 > 512). Running this sequence through the model will result in indexing errors
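The over-length warning appears because some documents tokenize to more than the 512 positions ELECTRA-base supports. One way to avoid it, sketched here with a hypothetical batch `texts`, is to truncate at encoding time:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("entelecheia/ekonelectra-base-discriminator")
texts = ["an example ESG disclosure paragraph ..."]  # hypothetical batch of documents

encodings = tokenizer(
    texts,
    truncation=True,   # drop tokens beyond max_length instead of overflowing the model
    max_length=512,    # the position-embedding limit referenced by the warning above
    padding=True,
    return_tensors="pt",
)
```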
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
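Of the two options above, the environment-variable route is the simplest to apply in a notebook; a minimal sketch (it must run before the fast tokenizer is first used in the parent process):

```python
import os

# Silences the fork warning above; set this before any `tokenizers` work happens,
# otherwise parallelism is disabled automatically once the process forks.
os.environ["TOKENIZERS_PARALLELISM"] = "false"
```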
wandb: Currently logged in as: entelecheia. Use `wandb login --relogin` to force relogin
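If a different account or API key is needed, the relogin can also be forced from Python; a minimal equivalent of the CLI hint above:

```python
import wandb

# Equivalent to running `wandb login --relogin` from the shell; prompts for a new API key.
wandb.login(relogin=True)
```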
Tracking run with wandb version 0.13.2
Run data is saved locally in /workspace/projects/ekorpkit-book/outputs/esg_topics/ekonelectra-base/wandb/run-20220906_071028-3bx0elnp
Finishing last run (ID:3bx0elnp) before initializing another...
Waiting for W&B process to finish... (success).
Run history:

| Metric | Trend |
| --- | --- |
| Training loss | █▇▆▅▃▂▁▁▄ |
| acc | ▁█ |
| eval_loss | █▁ |
| global_step | ▁▂▃▄▄▄▅▆▇██ |
| lr | █▇▆▅▄▄▃▂▁ |
| mcc | ▁█ |
| train_loss | █▁ |

Run summary:

| Metric | Value |
| --- | --- |
| Training loss | 1.03131 |
| acc | 0.76427 |
| eval_loss | 0.80164 |
| global_step | 456 |
| lr | 0.0 |
| mcc | 0.72096 |
| train_loss | 0.71639 |
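The `acc` and `mcc` entries above are the evaluation accuracy and, presumably, the Matthews correlation coefficient. For reference, both can be computed from eval-set predictions with scikit-learn; the label arrays below are hypothetical placeholders:

```python
from sklearn.metrics import accuracy_score, matthews_corrcoef

y_true = [0, 1, 2, 1, 0]  # hypothetical gold labels for the eval split
y_pred = [0, 1, 1, 1, 0]  # hypothetical model predictions

print("acc:", accuracy_score(y_true, y_pred))
print("mcc:", matthews_corrcoef(y_true, y_pred))
```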
Synced scarlet-wind-89: https://wandb.ai/entelecheia/ekorpkit-book-esg_topics/runs/3bx0elnp
Synced 4 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
Find logs at: /workspace/projects/ekorpkit-book/outputs/esg_topics/ekonelectra-base/wandb/run-20220906_071028-3bx0elnp/logs
Successfully finished last run (ID:3bx0elnp). Initializing new run:
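This "Finishing last run ... Successfully finished last run ... Initializing new run" cycle repeats before each subsequent run in this log; in code it corresponds to closing the active W&B run and opening a new one in the same project. A minimal sketch, with the project name taken from the run URLs above:

```python
import wandb

# Close the active run; this is the "Waiting for W&B process to finish... (success)" step.
wandb.finish()

# Start the next run in the same project (name taken from the run URLs in this log).
run = wandb.init(project="ekorpkit-book-esg_topics", reinit=True)
```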
Tracking run with wandb version 0.13.2
Run data is saved locally in /workspace/projects/ekorpkit-book/outputs/esg_topics/ekonelectra-base/wandb/run-20220906_071142-1r3nk1yw
Token indices sequence length is longer than the specified maximum sequence length for this model (691 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (533 > 512). Running this sequence through the model will result in indexing errors
Finishing last run (ID:1r3nk1yw) before initializing another...
Waiting for W&B process to finish... (success).
Synced fiery-plasma-90: https://wandb.ai/entelecheia/ekorpkit-book-esg_topics/runs/1r3nk1yw
Synced 5 W&B file(s), 1 media file(s), 1 artifact file(s) and 0 other file(s)
Find logs at: /workspace/projects/ekorpkit-book/outputs/esg_topics/ekonelectra-base/wandb/run-20220906_071142-1r3nk1yw/logs
Successfully finished last run (ID:1r3nk1yw). Initializing new run:
Tracking run with wandb version 0.13.2
Run data is saved locally in /workspace/projects/ekorpkit-book/outputs/esg_topics/ekonelectra-base/wandb/run-20220906_071214-kpbs288v
Finishing last run (ID:kpbs288v) before initializing another...
Waiting for W&B process to finish... (success).
Run history:

| Metric | Trend |
| --- | --- |
| Training loss | █▆▄▅▄▂▂▄▁ |
| acc | ▁█ |
| eval_loss | █▁ |
| global_step | ▁▂▃▄▄▄▅▆▇██ |
| lr | █▇▆▅▄▄▃▂▁ |
| mcc | ▁█ |
| train_loss | ▁█ |

Run summary:

| Metric | Value |
| --- | --- |
| Training loss | 0.47778 |
| acc | 0.77445 |
| eval_loss | 0.7631 |
| global_step | 456 |
| lr | 0.0 |
| mcc | 0.73189 |
| train_loss | 0.85427 |
Synced sweet-wave-91: https://wandb.ai/entelecheia/ekorpkit-book-esg_topics/runs/kpbs288v
Synced 4 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
Find logs at: /workspace/projects/ekorpkit-book/outputs/esg_topics/ekonelectra-base/wandb/run-20220906_071214-kpbs288v/logs
Successfully finished last run (ID:kpbs288v). Initializing new run:
Tracking run with wandb version 0.13.2
Run data is saved locally in /workspace/projects/ekorpkit-book/outputs/esg_topics/ekonelectra-base/wandb/run-20220906_071333-w3krkvu5
Token indices sequence length is longer than the specified maximum sequence length for this model (590 > 512). Running this sequence through the model will result in indexing errors
Finishing last run (ID:w3krkvu5) before initializing another...
Waiting for W&B process to finish... (success).
Synced icy-serenity-92: https://wandb.ai/entelecheia/ekorpkit-book-esg_topics/runs/w3krkvu5
Synced 5 W&B file(s), 1 media file(s), 1 artifact file(s) and 0 other file(s)
Find logs at: /workspace/projects/ekorpkit-book/outputs/esg_topics/ekonelectra-base/wandb/run-20220906_071333-w3krkvu5/logs
Successfully finished last run (ID:w3krkvu5). Initializing new run:
Tracking run with wandb version 0.13.2
Run data is saved locally in /workspace/projects/ekorpkit-book/outputs/esg_topics/ekonelectra-base/wandb/run-20220906_071403-2vv0hewt
Finishing last run (ID:2vv0hewt) before initializing another...
Waiting for W&B process to finish... (success).
Run history:

| Metric | Trend |
| --- | --- |
| Training loss | █▅▅▂▃▂▃▁▁ |
| acc | ▁█ |
| eval_loss | █▁ |
| global_step | ▁▂▃▄▄▄▅▆▇██ |
| lr | █▇▆▅▄▄▃▂▁ |
| mcc | ▁█ |
| train_loss | ▁█ |

Run summary:

| Metric | Value |
| --- | --- |
| Training loss | 0.67537 |
| acc | 0.76936 |
| eval_loss | 0.79864 |
| global_step | 456 |
| lr | 0.0 |
| mcc | 0.72731 |
| train_loss | 0.44434 |
Synced northern-dream-93: https://wandb.ai/entelecheia/ekorpkit-book-esg_topics/runs/2vv0hewt
Synced 4 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
Find logs at: /workspace/projects/ekorpkit-book/outputs/esg_topics/ekonelectra-base/wandb/run-20220906_071403-2vv0hewt/logs
Successfully finished last run (ID:2vv0hewt). Initializing new run:
Tracking run with wandb version 0.13.2
Run data is saved locally in /workspace/projects/ekorpkit-book/outputs/esg_topics/ekonelectra-base/wandb/run-20220906_071523-35lxag5u
Token indices sequence length is longer than the specified maximum sequence length for this model (731 > 512). Running this sequence through the model will result in indexing errors
Finishing last run (ID:35lxag5u) before initializing another...
Waiting for W&B process to finish... (success).
Synced iconic-sea-94: https://wandb.ai/entelecheia/ekorpkit-book-esg_topics/runs/35lxag5u
Synced 5 W&B file(s), 1 media file(s), 1 artifact file(s) and 0 other file(s)
Find logs at: /workspace/projects/ekorpkit-book/outputs/esg_topics/ekonelectra-base/wandb/run-20220906_071523-35lxag5u/logs
Successfully finished last run (ID:35lxag5u). Initializing new run:
Tracking run with wandb version 0.13.2
Run data is saved locally in /workspace/projects/ekorpkit-book/outputs/esg_topics/ekonelectra-base/wandb/run-20220906_071554-2e7jflgg
Finishing last run (ID:2e7jflgg) before initializing another...
Waiting for W&B process to finish... (success).
Run history:

| Metric | Trend |
| --- | --- |
| Training loss | █▆▆▃▃▃▂▁▃ |
| acc | ▁█ |
| eval_loss | █▁ |
| global_step | ▁▂▃▄▄▄▅▆▇██ |
| lr | █▇▆▅▄▄▃▂▁ |
| mcc | ▁█ |
| train_loss | █▁ |

Run summary:

| Metric | Value |
| --- | --- |
| Training loss | 1.06689 |
| acc | 0.77671 |
| eval_loss | 0.78679 |
| global_step | 456 |
| lr | 0.0 |
| mcc | 0.73375 |
| train_loss | 0.55624 |
Synced winter-haze-95: https://wandb.ai/entelecheia/ekorpkit-book-esg_topics/runs/2e7jflgg
Synced 4 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
Find logs at: /workspace/projects/ekorpkit-book/outputs/esg_topics/ekonelectra-base/wandb/run-20220906_071554-2e7jflgg/logs
Successfully finished last run (ID:2e7jflgg). Initializing new run:
Tracking run with wandb version 0.13.2
Run data is saved locally in /workspace/projects/ekorpkit-book/outputs/esg_topics/ekonelectra-base/wandb/run-20220906_071712-a7mqtfdv
Token indices sequence length is longer than the specified maximum sequence length for this model (731 > 512). Running this sequence through the model will result in indexing errors
Finishing last run (ID:a7mqtfdv) before initializing another...
Waiting for W&B process to finish... (success).
Synced trim-shadow-96: https://wandb.ai/entelecheia/ekorpkit-book-esg_topics/runs/a7mqtfdv
Synced 5 W&B file(s), 1 media file(s), 1 artifact file(s) and 0 other file(s)
Find logs at: /workspace/projects/ekorpkit-book/outputs/esg_topics/ekonelectra-base/wandb/run-20220906_071712-a7mqtfdv/logs
Successfully finished last run (ID:a7mqtfdv). Initializing new run:
Tracking run with wandb version 0.13.2
Run data is saved locally in /workspace/projects/ekorpkit-book/outputs/esg_topics/ekonelectra-base/wandb/run-20220906_071742-1sxdb0fs
Finishing last run (ID:1sxdb0fs) before initializing another...
Waiting for W&B process to finish... (success).
Run history:

| Metric | Trend |
| --- | --- |
| Training loss | █▅▆▅▂▁▃▁▂ |
| acc | ▁█ |
| eval_loss | █▁ |
| global_step | ▁▂▃▄▄▄▅▆▇██ |
| lr | █▇▆▅▄▄▃▂▁ |
| mcc | ▁█ |
| train_loss | ▁█ |

Run summary:

| Metric | Value |
| --- | --- |
| Training loss | 0.89486 |
| acc | 0.7671 |
| eval_loss | 0.7825 |
| global_step | 456 |
| lr | 0.0 |
| mcc | 0.72317 |
| train_loss | 0.5121 |
Synced super-plasma-97: https://wandb.ai/entelecheia/ekorpkit-book-esg_topics/runs/1sxdb0fs
Synced 4 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
Find logs at: /workspace/projects/ekorpkit-book/outputs/esg_topics/ekonelectra-base/wandb/run-20220906_071742-1sxdb0fs/logs
Successfully finished last run (ID:1sxdb0fs). Initializing new run:
Tracking run with wandb version 0.13.2
Run data is saved locally in /workspace/projects/ekorpkit-book/outputs/esg_topics/ekonelectra-base/wandb/run-20220906_071900-2obg8t98
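Purely as an illustration of aggregating the per-run summaries printed above (the five runs that report metrics), the values can be averaged directly; the numbers below are copied from the log:

```python
# Summary values copied from the five runs above that log metrics
acc = [0.76427, 0.77445, 0.76936, 0.77671, 0.7671]
mcc = [0.72096, 0.73189, 0.72731, 0.73375, 0.72317]

print(f"mean acc: {sum(acc) / len(acc):.4f}")  # ≈ 0.7704
print(f"mean mcc: {sum(mcc) / len(mcc):.4f}")  # ≈ 0.7274
```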