new_modeling_toolkit.core.utils package
Submodules
new_modeling_toolkit.core.utils.core_utils module
- cantor_pairing_function(a: int, b: int) → int
Encodes two natural numbers into a single, unique natural number. Ensures we have unique seeds for each combination of Monte Carlo seed and generator seed.
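The standard Cantor pairing formula gives a flavor of what this function likely computes; a minimal sketch (the package's actual implementation may differ in details):

```python
def cantor_pairing(a: int, b: int) -> int:
    # Classic Cantor pairing: a bijection from pairs of naturals to naturals,
    # so every (a, b) combination maps to a distinct seed value.
    return (a + b) * (a + b + 1) // 2 + b
```

Because the mapping is a bijection, no two seed pairs can collide, which is exactly the property needed when combining two independent seeds.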
- filter_not_none(values: List) → List
Filters the passed values for ones that are not None.
- Parameters:
values – values to filter
- Returns:
filtered values
- Return type:
filtered_values
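Based on the description, the behavior can be sketched as a one-line comprehension (a sketch, not necessarily the library's exact implementation):

```python
def filter_not_none(values):
    # Keep only entries that are not None; falsy values such as 0 and "" survive,
    # which is why an identity check (is not None) is used rather than truthiness.
    return [v for v in values if v is not None]
```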
- map_dict(func: Callable, dict_: Dict[Any, Any]) → Dict[Any, Any]
Returns a copy of the dictionary with the function applied to all of its values.
- Parameters:
dict_ – dictionary to apply the function to
func (function) – function to apply to the values of the dictionary
- Returns:
dict with mapped values
- Return type:
mapped_dict (dict)
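A minimal sketch of the described behavior, assuming keys are left untouched:

```python
def map_dict(func, dict_):
    # Build a new dict: same keys, func applied to every value
    return {key: func(value) for key, value in dict_.items()}
```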
- map_not_none(func: Callable, values: List) → List
Applies a function to each item in a list, skipping values in the list that are None.
- Parameters:
func – function to apply to list values
values – list of values to apply the function to
- Returns:
list of values with the function applied
- Return type:
mapped_values
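One plausible reading of "skipping" is that None entries pass through unchanged rather than being dropped; the actual function may instead filter them out. A sketch under that assumption:

```python
def map_not_none(func, values):
    # Apply func only to non-None items; None entries pass through in place
    return [None if v is None else func(v) for v in values]
```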
new_modeling_toolkit.core.utils.gurobi_utils module
- class GurobiCredentials(cloud_access_id: str, secret_key: str, license_id: str, pool_id: str | None = None, license_path: PathLike | None = None)
Bases:
object
Dataclass for storing Gurobi Instant Cloud credentials.
- check_if_pool_exists()
Checks if the pool ID associated with these credentials exists.
- Returns:
whether the pool exists or not
- create_pool(machine_type: str | None = None, num_instances: int = 1, job_limit: int = 1, num_distributed_workers: int = 0)
Creates a new Gurobi Cloud pool using the Pool ID specified in self.pool_id.
- Parameters:
machine_type – AWS EC2 instance type to create in the pool
num_instances – number of EC2 instances to create in the pool
job_limit – maximum number of jobs that can be run on a single instance (not the entire pool)
num_distributed_workers – maximum number of distributed workers on a single instance. Only relevant if a distributed algorithm is used.
- Returns:
the pool ID of the created pool.
- Raises:
ValueError – if the API request to create the pool is invalid
- classmethod from_license_file(license_path: PathLike)
Create a set of credentials from a downloaded Gurobi Instant Cloud license.
- Parameters:
license_path – path to the license file
- Returns:
instantiated credentials
- Return type:
instance
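Gurobi license files are key=value text files; a hedged sketch of how such a file might be parsed into credentials. The key names (CLOUDACCESSID, CLOUDKEY, LICENSEID, CLOUDPOOL) and the `Credentials` dataclass here are illustrative assumptions, not the package's actual code:

```python
from dataclasses import dataclass
from pathlib import Path
from typing import Optional


@dataclass
class Credentials:
    # Hypothetical stand-in for GurobiCredentials
    cloud_access_id: str
    secret_key: str
    license_id: str
    pool_id: Optional[str] = None


def parse_license_file(license_path):
    # Parse KEY=value lines, ignoring blanks and comments (assumed file format)
    fields = {}
    for line in Path(license_path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            fields[key.strip()] = value.strip()
    return Credentials(
        cloud_access_id=fields["CLOUDACCESSID"],
        secret_key=fields["CLOUDKEY"],
        license_id=fields["LICENSEID"],
        pool_id=fields.get("CLOUDPOOL"),  # optional: not every license names a pool
    )
```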
- start_pool(wait_time_seconds: int = 180) → bool
Sends an API request to start the instances in the desired pool represented by these credentials.
Note that even if wait_time_seconds is exceeded and the method returns False, the pool should still eventually start up and may just be taking longer than expected. If, for example, you are running a large RESOLVE model that takes several minutes to compile, you could call this method with a short wait time and allow the pool to continue starting up while your model compiles. The default of three minutes should generally be long enough for the pool to start.
- Parameters:
wait_time_seconds – how long to wait for the pool to start before exiting the function.
- Returns:
whether the pool is ready and idle after the wait time has passed
- Return type:
pool_ready
- to_license_file(output_path: PathLike)
Creates a license file that represents this credentials instance.
This is currently used because Pyomo does not allow the user to specify which pool to use other than through a license file. In order to change what pool is being used, the Gurobi license environment variable must be modified to point to a license file that contains the desired Pool ID.
- Parameters:
output_path – path to write the license file
new_modeling_toolkit.core.utils.pandas_utils module
- compare_dataframes(*, previous: DataFrame, new: DataFrame, indices: list[str], column_to_compare: str)
A more flexible comparison method for two dataframes.
The existing .compare() and .equals() methods from pandas don't quite fit our needs. This method allows you to compare any two dataframes by:
1. Concatenating the two DataFrames
2. Doing a string comparison of the "previous" and "new" columns
3. Returning a DataFrame with any deltas
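The three steps above can be sketched in plain pandas. This is a minimal illustration of the approach, assuming the frames are aligned on the `indices` columns before comparing; the library's implementation may differ:

```python
import pandas as pd


def compare_dataframes(*, previous, new, indices, column_to_compare):
    # Step 1: align both frames on the shared index columns and concatenate
    prev = previous.set_index(indices)[column_to_compare].astype(str).rename("previous")
    curr = new.set_index(indices)[column_to_compare].astype(str).rename("new")
    combined = pd.concat([prev, curr], axis=1)
    # Steps 2-3: string-compare the two columns and keep only the deltas
    return combined[combined["previous"] != combined["new"]]
```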
- convert_index_levels_to_datetime(pandas_object: Series | DataFrame, levels: int | str | Sequence[int] | Sequence[str], **kwargs) → Series | DataFrame
Converts one or more levels in a pandas Series or DataFrame with a MultiIndex to datetime type.
- Parameters:
pandas_object – series or data frame to convert the levels of
levels – level name(s) to convert (or integer positions, if unnamed)
**kwargs – additional arguments to pd.to_datetime() (e.g. format="%d/%m/%Y %H:%M")
- Returns:
pandas object with the converted levels
- Return type:
converted_object
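A sketch of one way to implement this with `MultiIndex.set_levels`, which re-parses only each level's unique values; this is an illustration of the technique, not necessarily the package's code:

```python
import pandas as pd


def convert_index_levels_to_datetime(pandas_object, levels, **kwargs):
    # Accept a single level name/position or a sequence of them
    if isinstance(levels, (int, str)):
        levels = [levels]
    converted = pandas_object.copy()
    index = converted.index
    for level in levels:
        position = index.names.index(level) if isinstance(level, str) else level
        # Re-parse this level's unique values as datetimes; codes stay intact
        index = index.set_levels(
            pd.to_datetime(index.levels[position], **kwargs), level=position
        )
    converted.index = index
    return converted
```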
- reindex_by_intersection(pandas_objects: Iterable[Series | DataFrame]) → List[Series | DataFrame]
Reindexes a list of frames using the set intersection of the row indices of all frames.
- Parameters:
pandas_objects – list of series or data frames
- Returns:
list of reindexed data frames
- Return type:
reindexed_frames
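The described behavior can be sketched by folding `Index.intersection` over all the objects and reindexing each one to the result (a sketch of the technique, not the library's exact code):

```python
from functools import reduce


def reindex_by_intersection(pandas_objects):
    pandas_objects = list(pandas_objects)
    # Fold intersection over every object's row index
    common = reduce(
        lambda left, right: left.intersection(right),
        (obj.index for obj in pandas_objects),
    )
    # Reindex each object to the shared rows so they all align
    return [obj.reindex(common) for obj in pandas_objects]
```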
new_modeling_toolkit.core.utils.parallelization_utils module
- parallelize(func: Callable, args_list: Sequence[Sequence[Any]] | None = None, kwargs_list: Sequence[Dict[str, Any]] | None = None, num_processes: int | None = None, show_progress_bar: bool = True, progress_bar_description: str | None = None, debug: bool | None = False, backend: str | None = 'loky', temp_folder: str | None = None) → List[Any]
Parallelizes calling the function with specified arguments using joblib as the backend.
- Parameters:
func – function to be called
args_list – list of positional arguments for each function call
kwargs_list – list of keyword arguments for each function call
num_processes – number of parallel processes to use. Default is the detected number of CPUs
show_progress_bar – whether to show a progress bar
progress_bar_description – short title for the progress bar
- Returns:
output of function for each of the arguments
- Return type:
outputs (list)
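The real function uses joblib; a stdlib sketch with `concurrent.futures` shows the equivalent call semantics (positional args per call, keyword args per call, serial fallback). Names and the serial-when-one-process behavior are assumptions for illustration:

```python
from concurrent.futures import ProcessPoolExecutor


def parallelize(func, args_list=None, kwargs_list=None, num_processes=None):
    # Determine the number of calls from whichever argument list was given
    n_calls = len(args_list) if args_list is not None else len(kwargs_list)
    args_list = args_list or [()] * n_calls
    kwargs_list = kwargs_list or [{}] * n_calls
    if num_processes == 1:
        # Serial fallback: no pickling, easy to step through in a debugger
        return [func(*a, **kw) for a, kw in zip(args_list, kwargs_list)]
    with ProcessPoolExecutor(max_workers=num_processes) as pool:
        futures = [pool.submit(func, *a, **kw) for a, kw in zip(args_list, kwargs_list)]
        # Results come back in submission order, matching the input lists
        return [f.result() for f in futures]
```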
new_modeling_toolkit.core.utils.pyomo_utils module
- convert_pyomo_object_to_dataframe(model_component: Param | Var | Expression | Constraint, exception: bool = True, dual_only: bool = False, use_doc_as_column_name: bool = False) → DataFrame
Converts an object from a pyomo model (Param, Var, Expression, or Constraint) into a pandas DataFrame.
If model_component is a Constraint, the lower bound, body, upper bound, and dual value will all be returned. Set dual_only=True to return only the dual value for the constraint.
- Parameters:
model_component – the component to convert to a DataFrame
exception – passthrough to pyomo.value(). If True, raise an exception for uninitialized components. If False, return None for uninitialized values.
dual_only – for a Constraint, whether to return only the dual values
use_doc_as_column_name – if True, use the component's optional doc attribute as the column name; otherwise, use the name of the component
- Returns:
the pyomo object in DataFrame format
- Return type:
df
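The general shape of such a conversion can be sketched without pyomo: an indexed component is essentially a mapping from index tuples to values, which becomes a one-column DataFrame with a MultiIndex. The helper name and the dict stand-in below are hypothetical illustrations, not the package's API:

```python
import pandas as pd


def component_values_to_dataframe(values, column_name):
    # `values` stands in for an indexed pyomo component: {index_tuple: value}
    df = pd.DataFrame.from_dict(values, orient="index", columns=[column_name])
    # Promote the tuple keys to a proper MultiIndex, mirroring the component's sets
    df.index = pd.MultiIndex.from_tuples(df.index)
    return df
```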
new_modeling_toolkit.core.utils.string_utils module
new_modeling_toolkit.core.utils.util module
- class DirStructure
Bases:
BaseModel
Directory and file structure of the model.
- Fields:
- field code_dir: Path = PosixPath('/home/docs/checkouts/readthedocs.org/user_builds/e3-resolve/checkouts/latest/new_modeling_toolkit')
- copy(**kwargs) → DirStructure
Returns a copy of the model.
- !!! warning "Deprecated"
This method is now deprecated; use model_copy instead.
If you need include or exclude, use:
```py
data = self.model_dump(include=include, exclude=exclude, round_trip=True)
data = {**data, **(update or {})}
copied = self.model_validate(data)
```
- Parameters:
include – Optional set or mapping specifying which fields to include in the copied model.
exclude – Optional set or mapping specifying which fields to exclude in the copied model.
update – Optional dictionary of field-value pairs to override field values in the copied model.
deep – If True, the values of fields that are Pydantic models will be deep-copied.
- Returns:
A copy of the model with included, excluded and updated fields as specified.