new_modeling_toolkit.core.utils package

Submodules

new_modeling_toolkit.core.utils.core_utils module

cantor_pairing_function(a: int, b: int) → int

Encodes two natural numbers into a single, unique natural number. This ensures a unique seed for each combination of Monte Carlo seed and generator seed.
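
A standard Cantor pairing implementation is shown below as a sketch of the expected behavior (the actual implementation in this module may differ):

```python
def cantor_pairing(a: int, b: int) -> int:
    # Standard Cantor pairing: a bijection from pairs of natural numbers to a
    # single natural number, so distinct (a, b) pairs never collide.
    return (a + b) * (a + b + 1) // 2 + b

assert cantor_pairing(1, 2) == 8
assert cantor_pairing(2, 1) == 7  # order matters, so swapped seeds stay distinct
```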

convert_to_bool(v)
filter_not_none(values: List) → List

Filters the passed values for ones that are not None.

Parameters:

values – values to filter

Returns:

filtered values

Return type:

filtered_values

map_dict(func: Callable, dict_: Dict[Any, Any]) → Dict[Any, Any]

Returns a copy of the dictionary with the function applied to all of its values.

Parameters:
  • dict_ – dictionary to apply the function to

  • func (function) – function to apply to the values of the dictionary

Returns:

dict with mapped values

Return type:

mapped_dict (dict)
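
A hypothetical usage sketch (the dictionary contents are illustrative only):

```python
scaled = map_dict(lambda value: value * 2, {"solar": 10, "wind": 25})
# scaled == {"solar": 20, "wind": 50}; keys are left unchanged
```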

map_not_none(func: Callable, values: List) → List

Applies a function to each item in a list, skipping values that are None.

Parameters:
  • func – function to apply to list values

  • values – list of values to apply the function to

Returns:

list of values with the function applied

Return type:

mapped_values

profile_memory(function, *args, **kwargs)
profile_time(function, *args, **kwargs)
sum_not_none(values: Iterable) → Any

Sums the values in an Iterable, ignoring those values which are None.

Parameters:

values – values to sum

Returns:

sum of values, without those that are None
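
An illustrative sketch of how the None-handling helpers above (filter_not_none, map_not_none, and sum_not_none) are expected to behave, assuming straightforward implementations:

```python
values = [1.5, None, 2.5, None, 4.0]

filter_not_none(values)                # [1.5, 2.5, 4.0]
map_not_none(lambda x: x * 2, values)  # expected: [3.0, 5.0, 8.0]
                                       # (whether Nones are dropped or kept in
                                       # place depends on the implementation)
sum_not_none(values)                   # 8.0
```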

timer(func)

Simple timer decorator
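
A minimal usage sketch, assuming the decorator reports the elapsed time of the wrapped call:

```python
@timer
def build_inputs():
    ...  # hypothetical expensive work; elapsed time is reported on return
```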

new_modeling_toolkit.core.utils.gurobi_utils module

class GurobiCredentials(cloud_access_id: str, secret_key: str, license_id: str, pool_id: str | None = None, license_path: PathLike | None = None)

Bases: object

Dataclass for storing Gurobi Instant Cloud credentials.

check_if_pool_exists()

Checks if the pool ID associated with these credentials exists.

Returns:

whether the pool exists or not

cloud_access_id: str
create_pool(machine_type: str | None = None, num_instances: int = 1, job_limit: int = 1, num_distributed_workers: int = 0)

Creates a new Gurobi Cloud pool using the Pool ID specified in self.pool_id.

Parameters:
  • machine_type – AWS EC2 instance type to create in the pool

  • num_instances – number of EC2 instances to create in the pool

  • job_limit – maximum number of jobs that can be run on a single instance (not the entire pool)

  • num_distributed_workers – max number of distributed workers on a single instance. Only relevant if a distributed algorithm is used.

Returns:

the pool ID of the created pool.

Raises:

ValueError – if the API request to create the pool is invalid

classmethod from_license_file(license_path: PathLike)

Create a set of credentials from a downloaded Gurobi Instant Cloud license.

Parameters:

license_path – path to the license file

Returns:

instantiated credentials

Return type:

instance

license_id: str
license_path: PathLike | None = None
pool_id: str | None = None
scale_pool(num_instances: int)
secret_key: str
start_pool(wait_time_seconds: int = 180) → bool

Sends an API request to start the instances in the desired pool represented by these credentials.

Note that even if wait_time_seconds is exceeded and the method returns False, the pool should still eventually start up and may just be taking longer than expected. If, for example, you are running a large RESOLVE model that takes several minutes to compile, you could call this method with a short wait time and allow the pool to continue starting up while your model compiles. The default of three minutes should generally be long enough for the pool to start.

Parameters:

wait_time_seconds – how long to wait for the pool to start before exiting the function.

Returns:

whether the pool is ready and idle after the wait time has passed

Return type:

pool_ready

to_license_file(output_path: PathLike)

Creates a license file that represents this credentials instance.

This is currently used because Pyomo does not allow the user to specify which pool to use other than through a license file. In order to change what pool is being used, the Gurobi license environment variable must be modified to point to a license file that contains the desired Pool ID.

Parameters:

output_path – path to write the license file

set_license_file_environment_variable(path_to_license: PathLike)
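
A hedged end-to-end sketch of the intended workflow, using only the methods and function documented above; the license paths and pool settings are illustrative:

```python
from pathlib import Path

# Hypothetical license file downloaded from Gurobi Instant Cloud.
credentials = GurobiCredentials.from_license_file(Path("gurobi.lic"))

if not credentials.check_if_pool_exists():
    credentials.create_pool(num_instances=2, job_limit=1)

# Point the solver at this pool by writing a license file and setting the
# Gurobi license environment variable to it (see to_license_file above).
credentials.to_license_file(Path("pool_gurobi.lic"))
set_license_file_environment_variable(Path("pool_gurobi.lic"))

# A short wait is often enough; even if this returns False, the pool may
# simply still be starting up while the model compiles.
pool_ready = credentials.start_pool(wait_time_seconds=60)
```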

new_modeling_toolkit.core.utils.pandas_utils module

compare_dataframes(*, previous: DataFrame, new: DataFrame, indices: list[str], column_to_compare: str)

A more flexible comparison method for two dataframes.

The existing .compare() and .equals() methods from pandas don’t quite fit our needs. This method allows you to compare any two dataframes (see the sketch after this list) by:

  • Concatenating the two DataFrames

  • Doing a string comparison of the “previous” and “new” columns

  • Returning a DataFrame with any deltas
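
A minimal sketch of the comparison described above, assuming column_to_compare exists in both frames and indices identifies matching rows (the real function’s output columns may differ):

```python
import pandas as pd

def compare_dataframes_sketch(previous, new, indices, column_to_compare):
    # Align the two frames on the identifying index columns.
    prev = previous.set_index(indices)[column_to_compare].rename("previous")
    curr = new.set_index(indices)[column_to_compare].rename("new")
    combined = pd.concat([prev, curr], axis=1)
    # String comparison of the "previous" and "new" columns; keep only deltas.
    return combined[combined["previous"].astype(str) != combined["new"].astype(str)]
```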

convert_index_levels_to_datetime(pandas_object: Series | DataFrame, levels: int | str | Sequence[int] | Sequence[str], **kwargs) → Series | DataFrame

Converts one or more levels of a pandas Series or DataFrame MultiIndex to datetime type.

Parameters:
  • pandas_object – series or data frame to convert the levels of

  • levels – level name(s) to convert (or integer positions, if unnamed)

  • **kwargs – additional arguments to pd.to_datetime() (e.g. format="%d/%m/%Y %H:%M")

Returns:

pandas object with the converted levels

Return type:

converted_object
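
An illustrative call, assuming a MultiIndex level named "timestamp" that holds date strings (both names are hypothetical):

```python
converted = convert_index_levels_to_datetime(
    hourly_profiles,          # hypothetical DataFrame with a MultiIndex
    levels="timestamp",
    format="%d/%m/%Y %H:%M",  # forwarded to pd.to_datetime()
)
```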

reindex_by_intersection(pandas_objects: Iterable[Series | DataFrame]) → List[Series | DataFrame]

Reindexes a list of frames using the set intersection of the row indices of all frames.

Parameters:

pandas_objects – list of series or data frames

Returns:

list of reindexed data frames

Return type:

reindexed_frames

reindex_by_union(pandas_objects: Iterable[Series | DataFrame]) → List[Series | DataFrame]

Reindexes a list of frames using the set union of the row indices of all frames.

Parameters:

pandas_objects – list of series or data frames

Returns:

list of reindexed data frames

Return type:

reindexed_frames
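
A sketch of the expected behavior of the two reindexing helpers, using two small illustrative series:

```python
import pandas as pd

a = pd.Series([1, 2, 3], index=["x", "y", "z"])
b = pd.Series([10, 20], index=["y", "z"])

intersected = reindex_by_intersection([a, b])  # both reindexed to ["y", "z"]
unioned = reindex_by_union([a, b])             # both reindexed to ["x", "y", "z"];
                                               # b gets a missing value at "x"
```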

new_modeling_toolkit.core.utils.parallelization_utils module

parallelize(func: Callable, args_list: Sequence[Sequence[Any]] | None = None, kwargs_list: Sequence[Dict[str, Any]] | None = None, num_processes: int | None = None, show_progress_bar: bool = True, progress_bar_description: str | None = None, debug: bool | None = False, backend: str | None = 'loky', temp_folder: str | None = None) → List[Any]

Parallelizes calling the function with specified arguments using joblib as the backend.

Parameters:
  • func – function to be called

  • args_list – list of positional arguments for each function call

  • kwargs_list – list of keyword arguments for each function call

  • num_processes – number of parallel processes to use. Default is the detected number of CPUs

  • show_progress_bar – whether to show a progress bar

  • progress_bar_description – short title for the progress bar

Returns:

output of function for each of the arguments

Return type:

outputs (list)
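
A usage sketch, assuming each call receives one tuple of positional arguments from args_list and one dict of keyword arguments from kwargs_list (the worker function and its arguments are hypothetical):

```python
def run_case(case_name, scenario, log_level="INFO"):
    ...  # hypothetical worker function
    return case_name

results = parallelize(
    func=run_case,
    args_list=[("base", "high_load"), ("base", "low_load")],
    kwargs_list=[{"log_level": "DEBUG"}, {"log_level": "INFO"}],
    num_processes=2,
    progress_bar_description="Running cases",
)
# results holds one return value per argument set.
```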

new_modeling_toolkit.core.utils.pyomo_utils module

convert_pyomo_object_to_dataframe(model_component: Param | Var | Expression | Constraint, exception: bool = True, dual_only: bool = False, use_doc_as_column_name: bool = False) → DataFrame

Converts an object from a pyomo model (Param, Var, Expression, or Constraint) into a pandas DataFrame.

If model_component is a Constraint, the lower bound, body, upper bound, and dual value will all be returned. Set dual_only=True to return only the dual value for the constraint.

Parameters:
  • model_component – the component to convert to a DataFrame

  • exception – Passed through to Pyomo’s value(). If True, raise an exception for uninitialized components. If False, return None for uninitialized values.

  • dual_only – for a Constraint, whether to return only the dual values

  • use_doc_as_column_name – if True, the column name is taken from the component’s optional doc attribute; otherwise, the component’s name is used as the column name

Returns:

the pyomo object in DataFrame format

Return type:

df
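
An illustrative call on a small Pyomo model (the model and variable are hypothetical):

```python
import pyomo.environ as pyo

model = pyo.ConcreteModel()
model.HOURS = pyo.Set(initialize=[1, 2, 3])
model.Generation = pyo.Var(model.HOURS, initialize=0.0, doc="Generation (MW)")

df = convert_pyomo_object_to_dataframe(model.Generation, use_doc_as_column_name=True)
# df is indexed by the variable's index set, with a single column of values
# named "Generation (MW)" because use_doc_as_column_name=True.
```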

get_index_labels(model_component: Param | Var | Expression | Constraint) → list[str]

Gets the names of the indices of a Pyomo model component instance by unpacking the tuple listed in the “doc” attribute of the Set definition.

mark_pyomo_component(func)

Simple timer decorator

new_modeling_toolkit.core.utils.string_utils module

convert_snake_to_camel_case(string)
convert_snake_to_lower_camel_case(string)
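
These helpers have no docstrings; the sketch below shows the conventional snake_case-to-CamelCase behavior they presumably implement (an assumption, not the documented contract):

```python
def snake_to_camel(string: str) -> str:
    # e.g. "monte_carlo_seed" -> "MonteCarloSeed" (assumed behavior)
    return "".join(part.capitalize() for part in string.split("_"))

def snake_to_lower_camel(string: str) -> str:
    # e.g. "monte_carlo_seed" -> "monteCarloSeed" (assumed behavior)
    camel = snake_to_camel(string)
    return camel[:1].lower() + camel[1:]
```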

new_modeling_toolkit.core.utils.util module

class DirStructure

Bases: BaseModel

Directory and file structure of the model.

Fields:
field code_dir: Path = PosixPath('/home/docs/checkouts/readthedocs.org/user_builds/e3-resolve/checkouts/latest/new_modeling_toolkit')
field data_folder: str = 'data'
field proj_dir: Path = None
field start_dir: Path | None = None
field tool_name: str = 'kit'
class Config

Bases: object

extra = 'allow'
copy(**kwargs) → DirStructure

Returns a copy of the model.

Deprecated: this method is now deprecated; use model_copy instead.

If you need include or exclude, use:

```py
data = self.model_dump(include=include, exclude=exclude, round_trip=True)
data = {**data, **(update or {})}
copied = self.model_validate(data)
```

Parameters:
  • include – Optional set or mapping specifying which fields to include in the copied model.

  • exclude – Optional set or mapping specifying which fields to exclude in the copied model.

  • update – Optional dictionary of field-value pairs to override field values in the copied model.

  • deep – If True, the values of fields that are Pydantic models will be deep-copied.

Returns:

A copy of the model with included, excluded and updated fields as specified.

get_valid_results_dirs(model: str)

Creates a list of all non-empty results folders for the specified model

Parameters:

model – name of the model whose outputs to filter. Should be one of: ["resolve", "reclaim", "recap"]

Returns:

list of all non-empty results directories for the specified model

make_directories()
make_pathways_dir(case_name, log_level: str = 'DEBUG')
make_recap2_dir(case_name)
make_recap_dir(case_name=None, log_level='DEBUG', skip_creating_results_folder=False)
make_reclaim_dir(reclaim_config_name)
make_resolve_dir(resolve_settings_name: str, timestamp: str = None)
class StreamToLogger(level='DEBUG')

Bases: object

Class to help loguru capture all print() from stdout.

The use-case for this in Pyomo is the tee=True feed from the solver. Because of this, the logging level is assumed to be DEBUG.

flush()
write(buffer)
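
A minimal usage sketch for routing stdout through loguru, assuming the standard pattern of temporarily swapping sys.stdout:

```python
import sys

# While the redirection is in place, anything printed to stdout (for example
# the solver's tee=True output) is logged through loguru at DEBUG level.
original_stdout = sys.stdout
sys.stdout = StreamToLogger()
try:
    print("solver log line")  # captured by loguru instead of the console
finally:
    sys.stdout = original_stdout  # restore normal printing
```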

Module contents