fix: improve error messages for resource limitations and WebSocket issues

FerTV committed Feb 7, 2025
1 parent 0c14f56 commit ac6adae
Showing 25 changed files with 306 additions and 275 deletions.
62 changes: 31 additions & 31 deletions docs/_prebuilt/developerguide.md
@@ -45,12 +45,12 @@ This section explains the structure of the frontend and provides instructions on
start_services.sh
```

The frontend is organized within the `frontend/` directory. Key files and folders include:

- `config/` → Contains **participant.json.example**, the default structure for the parameters passed to each participant (see the sketch after this list).
- `databases/` → Contains the different databases for NEBULA.
- `static/` → Holds static assets (CSS, images, JS, etc.).
- `templates/` → Contains HTML templates. Focus on **deployment.html**.
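
For orientation, here is a minimal sketch of how a script might load that configuration. The path follows the layout above; the actual keys depend on the schema of **participant.json.example**:

```python
import json

# Minimal sketch: load the default participant configuration.
# The key names inside depend on the actual schema of the example file.
with open("frontend/config/participant.json.example") as f:
    participant = json.load(f)

print(sorted(participant.keys()))
```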

### **Adding a New Parameter**

@@ -119,7 +119,7 @@ To implement a new attack type, first locate the section where attacks are defined
</h5>
<div class="form-check form-check-inline" style="display: none;" id="new-parameter-container">
    <input type="number" class="form-control" id="new-parameter-value"
           placeholder="new parameter value" min="0" value="0">
</div>
</div>
</div>
@@ -204,43 +204,43 @@ To view the documentation of functions in more detail, you must go to the **NEBU
utils.py
```

The backend is organized within the `/nebula/` directory. Key files and folders include:

**Addons/**

The `addons/` directory contains extended functionalities that can be integrated into the core system.

- **`attacks/`** → Simulates attacks, primarily for security purposes, including adversarial attacks in machine learning.
- **`blockchain/`** → Integrates blockchain technology, potentially for decentralized storage or security enhancements.
- **`trustworthiness/`** → Evaluates the trustworthiness and reliability of participants, focusing on security and ethical considerations.
- **`waf/`** → Implements a Web Application Firewall (WAF) to filter and monitor HTTP traffic for potential threats.

**Core/**

The `core/` directory contains the essential components for the backend operation.

- **`aggregation/`** → Manages the aggregation of data from different nodes.
- **`datasets/`** → Handles dataset management, including loading and preprocessing data.
- **`models/`** → Defines machine learning model architectures and related functionalities, such as training and evaluation.
- **`network/`** → Manages communication between participants in a distributed system.
- **`pb/`** → Implements Protocol Buffers (PB) for efficient data serialization and communication.
- **`training/`** → Contains the logic for model training, optimization, and evaluation.
- **`utils/`** → Provides utility functions for file handling, logging, and common tasks.

**Files**

- **`engine.py`** → The main engine orchestrating participant communications, training, and overall behavior.
- **`eventmanager.py`** → Handles event management, logging, and notifications within the system.
- **`role.py`** → Defines participant roles and their interactions.

**Standalone Scripts**

These scripts act as entry points or controllers for various backend functionalities.

- **`controller.py`** → Manages the flow of operations, coordinating tasks and interactions.
- **`participant.py`** → Represents a participant in the decentralized network, handling computations and communication.
- **`scenarios.py`** → Defines different simulation scenarios for testing and running participants under specific conditions.
- **`utils.py`** → Contains helper functions that simplify development and maintenance.


### **Adding New Datasets**
@@ -371,7 +371,7 @@ If you want to import a dataset, you must first create a folder named **data** w
# self._load_data(self.path_to_data)

mode = "train" if self.is_train else "test"
self.image_list = glob.glob(os.path.join(self.path_to_data, f"{self.name}/{mode}/*/*.npy"))
self.label_list = glob.glob(os.path.join(self.path_to_data, f"{self.name}/{mode}/*/*.json"))
self.image_list = sorted(self.image_list, key=os.path.basename)
self.label_list = sorted(self.label_list, key=os.path.basename)
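
The fragment above builds matched, sorted lists of `.npy` images and `.json` labels. As a self-contained, hypothetical sketch of how a dataset class can wrap that logic (the constructor signature, method names, and JSON label format are assumptions, not NEBULA's actual implementation):

```python
import glob
import json
import os

import numpy as np

class MilitarySAR:
    """Hypothetical sketch mirroring the glob-based loading shown above."""

    def __init__(self, name, path_to_data, is_train=True, transform=None):
        self.transform = transform
        mode = "train" if is_train else "test"
        # Pair images and labels by sorting both lists on the file name.
        self.image_list = sorted(
            glob.glob(os.path.join(path_to_data, f"{name}/{mode}/*/*.npy")),
            key=os.path.basename,
        )
        self.label_list = sorted(
            glob.glob(os.path.join(path_to_data, f"{name}/{mode}/*/*.json")),
            key=os.path.basename,
        )

    def __len__(self):
        return len(self.image_list)

    def __getitem__(self, idx):
        # Assumes each label is a plain JSON file next to its .npy image.
        image = np.load(self.image_list[idx])
        with open(self.label_list[idx]) as f:
            label = json.load(f)
        if self.transform is not None:
            image = self.transform(image)
        return image, label
```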
@@ -424,7 +424,7 @@ Then you must create a **MilitarySARDataset** class in order to use it, as shown

#### Define transforms

You can apply transformations like cropping and normalization using `torchvision.transforms`.

For example, the **MilitarySAR** dataset uses **RandomCrop** for training and **CenterCrop** for testing.

@@ -483,7 +483,7 @@ For example, the **MilitarySAR** dataset uses **RandomCrop** for training and **
    apply_transforms = [CenterCrop(88), transforms.ToTensor()]
    if train:
        apply_transforms = [RandomCrop(88), transforms.ToTensor()]

    return MilitarySAR(name="soc", is_train=train, transform=transforms.Compose(apply_transforms))
```
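
torchvision's stock crop transforms expect PIL images or tensors, so datasets that store raw numpy arrays often ship their own crop callables. A hypothetical numpy-based sketch of the two crops used above (the real classes in the MilitarySAR module may differ):

```python
import numpy as np

class CenterCrop:
    """Hypothetical crop taking the central window of a (H, W, ...) array."""

    def __init__(self, size):
        self.size = size

    def __call__(self, image):
        h, w = image.shape[:2]
        top = (h - self.size) // 2
        left = (w - self.size) // 2
        return image[top:top + self.size, left:left + self.size]

class RandomCrop:
    """Hypothetical crop taking a random window of the given size."""

    def __init__(self, size):
        self.size = size

    def __call__(self, image):
        h, w = image.shape[:2]
        top = np.random.randint(0, h - self.size + 1)
        left = np.random.randint(0, w - self.size + 1)
        return image[top:top + self.size, left:left + self.size]
```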

@@ -816,4 +816,4 @@ The new aggregator must inherit from the **Aggregator** class. You can use **Fed

# self.print_model_size(accum)
return accum
```
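
For reference, a hypothetical FedAvg-style aggregation loop consistent with the `return accum` fragment above. The import path, method name, and model format are assumptions; check the existing aggregators in `core/aggregation/` for the actual interface:

```python
from nebula.core.aggregation.aggregator import Aggregator  # assumed import path

class MyFedAvgAggregator(Aggregator):
    """Hypothetical sketch: sample-weighted average of model parameters."""

    def run_aggregation(self, models):
        # Assumed input format: {node_id: (state_dict, num_samples)}.
        total_samples = sum(samples for _, samples in models.values())
        accum = None
        for state_dict, samples in models.values():
            factor = samples / total_samples
            if accum is None:
                accum = {key: param * factor for key, param in state_dict.items()}
            else:
                for key, param in state_dict.items():
                    accum[key] += param * factor
        # self.print_model_size(accum)
        return accum
```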
6 changes: 3 additions & 3 deletions docs/_prebuilt/installation.md
@@ -22,17 +22,17 @@ Or clone the repository using git:

Now, you can move to the source directory:

<pre><code><span style="color: blue;">user@host</span>:~$ <span style="color: green;">cd nebula</span></code></pre>

### **Installing NEBULA**

Install required dependencies and set up Docker containers by running:

<pre><code><span style="color: blue;">user@host</span>:~$ <span style="color: green;">make install</span></code></pre>

Next, activate the virtual environment:

<pre><code><span style="color: blue;">user@host</span>:~$ <span style="color: green;">source .venv/bin/activate</span></code></pre>

If you forget this command, you can type:

2 changes: 1 addition & 1 deletion docs/_prebuilt/js/toc.js
@@ -1,5 +1,5 @@
document.addEventListener('DOMContentLoaded', function() {
    if (window.location.pathname.includes("api")) {
        document.querySelector('.md-sidebar--primary').style.display = 'block';
    }
});