52 Commits

Author SHA1 Message Date
Thomas Polasek
055ab3a2e3 Convert directory fbcode/vision to use the Ruff Formatter
Summary:
Converts the directory specified to use the Ruff formatter in pyfmt

ruff_dog

If this diff causes merge conflicts when rebasing, please run
`hg status -n -0 --change . -I '**/*.{py,pyi}' | xargs -0 arc pyfmt`
on your diff, and amend any changes before rebasing onto latest.
That should help reduce or eliminate any merge conflicts.

allow-large-files

Reviewed By: bottler

Differential Revision: D66472063

fbshipit-source-id: 35841cb397e4f8e066e2159550d2f56b403b1bef
2024-11-26 02:38:20 -08:00
Conner Nilsen
a27755db41 Pyre Configurationless migration for] [batch:85/112] [shard:6/N]
Reviewed By: inseokhwang

Differential Revision: D54438157

fbshipit-source-id: a6acfe146ed29fff82123b5e458906d4b4cee6a2
2024-03-04 18:30:37 -08:00
Emilien Garreau
5910d81b7b Add blurpool following MIPNerf paper.
Summary:
Add blurpool as defined in [MIP-NeRF](https://arxiv.org/abs/2103.13415).
It has been added as an option for RayPointRefiner.
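
For context, a sketch of the blurpool described in the MIP-NeRF paper (the exact option name on RayPointRefiner is not shown here): the coarse resampling weights \(w_k\) are replaced by a maxed-and-blurred upper envelope

\[
w'_k = \tfrac{1}{2}\big(\max(w_{k-1}, w_k) + \max(w_k, w_{k+1})\big) + \alpha,
\]

where \(\alpha\) is a small padding constant, before being renormalized.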

Reviewed By: shapovalov

Differential Revision: D46356189

fbshipit-source-id: ad841bad86d2b591a68e1cb885d4f781cf26c111
2023-07-06 02:20:53 -07:00
Emilien Garreau
ccf860f1db Add integrated position encoding based on MIPNerf implementation.
Summary: Add a new implicit module, Integrated Position Encoding, based on [MIP-NeRF](https://arxiv.org/abs/2103.13415).
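
For reference (taken from the MIP-NeRF paper rather than from this diff), the integrated positional encoding of a Gaussian with mean \(\mu\) and diagonal covariance \(\Sigma\) is the expected positional encoding under that Gaussian:

\[
\gamma(\mu, \Sigma) = \Big\{ \sin(2^{l}\mu)\exp\big(-2^{2l-1}\operatorname{diag}(\Sigma)\big),\; \cos(2^{l}\mu)\exp\big(-2^{2l-1}\operatorname{diag}(\Sigma)\big) \Big\}_{l=0}^{L-1},
\]

so high-frequency components are attenuated for wide Gaussians.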

Reviewed By: shapovalov

Differential Revision: D46352730

fbshipit-source-id: c6a56134c975d80052b3a11f5e92fd7d95cbff1e
2023-07-06 02:20:53 -07:00
Emilien Garreau
29b8ebd802 Add utils to approximate the conical frustums as multivariate gaussians.
Summary:
Introduce methods to approximate the radii of conical frustums along rays as described in [MipNerf](https://arxiv.org/abs/2103.13415):

- Two new attributes are added to ImplicitronRayBundle: bins and radii. Bins is of size n_pts_per_ray + 1, which makes it easy to manipulate the n_pts_per_ray intervals; for example, the interval coordinates are needed to compute \(t_{\mu}\) and \(t_{\delta}\) for the radii. Radii store the radii of the conical frustums.

- Add 3 new methods to compute the radii (see the formulas sketched after this list):
   - approximate_conical_frustum_as_gaussians: computes the mean along the ray direction, the variance of the
      conical frustum with respect to t, and the variance of the conical frustum with respect to its radius. This
      implementation follows the stable computation defined in the paper.
   - compute_3d_diagonal_covariance_gaussian: leverages the two previously computed variances to find the
     diagonal covariance of the Gaussian.
   - conical_frustum_to_gaussian: combines everything to compute the means and the diagonal covariances
      of the Gaussians along the ray.

- In AbstractMaskRaySampler, introduce the attribute `cast_ray_bundle_as_cone`. If False, the previous behaviour of the RaySampler is unchanged. If True, the samplers sample `n_pts_per_ray + 1` points instead of `n_pts_per_ray`; these points are then used to set the bins attribute of ImplicitronRayBundle. Support for HeterogeneousRayBundle has not been added since the current code does not allow it; a safeguard has been added to avoid a silent bug in the future.
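
For reference, a sketch of the stable formulas from the MIP-NeRF paper that approximate_conical_frustum_as_gaussians follows (math only, not the exact function signature): with interval midpoint \(t_{\mu} = (t_0 + t_1)/2\), half-width \(t_{\delta} = (t_1 - t_0)/2\), and base radius \(\dot{r}\),

\[
\mu_t = t_{\mu} + \frac{2 t_{\mu} t_{\delta}^2}{3 t_{\mu}^2 + t_{\delta}^2}, \quad
\sigma_t^2 = \frac{t_{\delta}^2}{3} - \frac{4 t_{\delta}^4\,(12 t_{\mu}^2 - t_{\delta}^2)}{15\,(3 t_{\mu}^2 + t_{\delta}^2)^2}, \quad
\sigma_r^2 = \dot{r}^2 \left( \frac{t_{\mu}^2}{4} + \frac{5 t_{\delta}^2}{12} - \frac{4 t_{\delta}^4}{15\,(3 t_{\mu}^2 + t_{\delta}^2)} \right).
\]

compute_3d_diagonal_covariance_gaussian and conical_frustum_to_gaussian then place the mean at \(o + \mu_t d\) along the ray and assemble the diagonal covariance from \(\sigma_t^2\) and \(\sigma_r^2\).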

Reviewed By: shapovalov

Differential Revision: D45269190

fbshipit-source-id: bf22fad12d71d55392f054e3f680013aa0d59b78
2023-07-06 01:55:41 -07:00
Roman Shapovalov
cd5db076d5 Adding SQL dataset classes to ImplicitronDataSource imports
Summary: Making it easier for the clients to use these datasets.

Reviewed By: bottler

Differential Revision: D46727179

fbshipit-source-id: cf619aee4c4c0222a74b30ea590cf37f08f014cc
2023-06-14 10:51:47 -07:00
Roman Shapovalov
b0462598ac Refactor: FrameDataBuilder is more extensible.
Summary:
This is mostly a refactoring diff to reduce friction in extending the frame data.

Slight functional changes: dataset getitem now accepts (seq_name, frame_number_as_singleton_tensor) as a non-advertised feature. Without this, the following code crashes:
```
item = dataset[0]
dataset[item.sequence_name, item.frame_number]
```

Reviewed By: bottler

Differential Revision: D45780175

fbshipit-source-id: 75b8e8d3dabed954a804310abdbd8ab44a8dea29
2023-05-17 10:38:34 -07:00
Emilien Garreau
813e941de5 Add the OverfitModel
Summary:
Introduces the OverfitModel for NeRF-style training with overfitting to one scene.
It is a specific case of GenericModel. It has been disentangled to ease usage.

## General modification

1. Modularize the minimal parts of GenericModel needed to introduce OverfitModel
2. Introduce OverfitModel and ensure through unit testing that it behaves like GenericModel.

## Modularization

The following methods have been extracted from GenericModel to allow modularity with ManyViewModel:
- get_objective is now a call to weighted_sum_losses
- log_loss_weights
- prepare_inputs

The generic methods have been moved to a utils.py file.

Simplify the code to introduce OverfitModel.

Private methods like chunk_generator are now public and can be used by ManyViewModel.

Reviewed By: shapovalov

Differential Revision: D43771992

fbshipit-source-id: 6102aeb21c7fdd56aa2ff9cd1dd23fd9fbf26315
2023-03-24 07:27:39 -07:00
Jeremy Reizenstein
b7e3b7b16c rendered_mesh_dataset improvements
Summary: Allow choosing the device and the distance

Reviewed By: shapovalov

Differential Revision: D42451605

fbshipit-source-id: 214f02d09da94eb127b3cc308d5bae800dc7b9e2
2023-01-16 07:46:41 -08:00
Jeremy Reizenstein
7be49bf46f allow dots in param_groups
Summary:
Allow a module's param_groups member to specify overrides to the param groups of its members or their members.
Also add logging for param group assignments.

This allows defining `params.basis_matrix` in the param_groups of a voxel_grid.
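
A hypothetical illustration of such a dotted override (the group names below are made up; only `params.basis_matrix` comes from this diff):

```python
# Hypothetical param_groups for a voxel_grid module: a dotted key reaches into
# a member's member, here the nested `params.basis_matrix` parameter.
voxel_grid_param_groups = {
    "self": "voxel_grid",            # all parameters of the voxel_grid module
    "params.basis_matrix": "basis",  # override just this nested parameter
}
```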

Reviewed By: shapovalov

Differential Revision: D41080667

fbshipit-source-id: 49f3b0e5b36e496f78701db0699cbb8a7e20c51e
2022-11-07 06:41:40 -08:00
David Novotny
e4a3298149 CO3Dv2 multi-category extension
Summary:
Allows loading of multiple categories.
Multiple categories are provided in a comma-separated list of category names.

Reviewed By: bottler, shapovalov

Differential Revision: D40803297

fbshipit-source-id: 863938be3aa6ffefe9e563aede4a2e9e66aeeaa8
2022-11-02 13:55:25 -07:00
David Novotny
bea84a6fcd voxel_grid_if -> remove union type from the settings.
Summary: see title

Reviewed By: shapovalov

Differential Revision: D40803670

fbshipit-source-id: 211189167837af577d6502a698e2f3fb3aec3e30
2022-10-31 11:37:43 -07:00
Jeremy Reizenstein
ca58863347 yaml test fix
Summary: Yaml bool case fix

Reviewed By: shapovalov

Differential Revision: D40623031

fbshipit-source-id: 29b2fba171c2cbebfa03834e38b614d07275c997
2022-10-23 07:09:26 -07:00
Jeremy Reizenstein
74754bbf17 voxel_grid_implicit_function
Reviewed By: shapovalov

Differential Revision: D40622304

fbshipit-source-id: 277515a55c46d9b8300058b439526539a7fe00a0
2022-10-23 05:36:34 -07:00
Jeremy Reizenstein
611aba9a20 replicate_last_interval in raymarcher
Summary: Add an option to flat-pad the last delta. Might help when training on RGB only.

Reviewed By: shapovalov

Differential Revision: D40587475

fbshipit-source-id: c763fa38948600ea532c730538dc4ff29d2c3e0a
2022-10-23 02:47:09 -07:00
Jeremy Reizenstein
fe5bdb2fb5 different learning rate for different parts
Summary:
Adds the ability to have different learning rates for different parts of the model. The trainable parts of Implicitron have a new member:

       param_groups: dictionary where keys are names of individual parameters
            or of a module's members, and values are the parameter group into which
            the parameter/member will be sorted. The "self" key is used to denote the
            parameter group at the module level. Possible keys, including the "self" key,
            do not have to be defined. By default all parameters are put into the "default"
            parameter group and have the learning rate defined in the optimizer;
            it can be overridden at the:
                - module level with the "self" key: all the parameters and child
                    modules' parameters will be put into that parameter group
                - member level, which is the same as if the `param_groups` in that
                    member had key="self" and value equal to that parameter group.
                    This is useful if members do not have `param_groups`, for
                    example torch.nn.Linear.
                - parameter level: the parameter with the same name as the key
                    will be put into that parameter group.

And in the optimizer factory, parameters and their learning rates are recursively gathered.
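
A minimal sketch, not Implicitron's actual factory code, of how such named groups typically end up as torch parameter groups with their own learning rates:

```python
import torch

# Toy model standing in for a module hierarchy with a "default" and a
# "voxel_grid" parameter group.
model = torch.nn.ModuleDict(
    {"decoder": torch.nn.Linear(16, 3), "voxel_grid": torch.nn.Linear(16, 16)}
)

# Learning rate per named group; anything unlisted falls back to "default".
group_lrs = {"default": 1e-3, "voxel_grid": 1e-1}

optimizer = torch.optim.Adam(
    [
        {"params": model["decoder"].parameters(), "lr": group_lrs["default"]},
        {"params": model["voxel_grid"].parameters(), "lr": group_lrs["voxel_grid"]},
    ]
)
```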

Reviewed By: shapovalov

Differential Revision: D40145802

fbshipit-source-id: 631c02b8d79ee1c0eb4c31e6e42dbd3d2882078a
2022-10-18 15:58:18 -07:00
Darijan Gudelj
c311a4cbb9 Enable mixed frame raysampling
Summary:
Changed ray_sampler and metrics to be able to use mixed frame raysampling.

Ray_sampler now has a new member which it passes to the pytorch3d raysampler.
If the raybundle is heterogeneous, metrics now samples images by padding xys first. This reduces memory consumption.

Reviewed By: bottler, kjchalup

Differential Revision: D39542221

fbshipit-source-id: a6fec23838d3049ae5c2fd2e1f641c46c7c927e3
2022-10-03 08:36:47 -07:00
Jeremy Reizenstein
209c160a20 foreach optimizers
Summary: Allow using the new `foreach` option on optimizers.
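
A minimal sketch of what the option means at the torch level (the Implicitron config key forwarding it is not shown here):

```python
import torch

# `foreach=True` asks PyTorch for the batched (multi-tensor) implementation of
# the optimizer update step, which is usually faster on GPU.
model = torch.nn.Linear(8, 8)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, foreach=True)
```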

Reviewed By: shapovalov

Differential Revision: D39694843

fbshipit-source-id: 97109c245b669bc6edff0f246893f95b7ae71f90
2022-09-22 05:11:56 -07:00
Jeremy Reizenstein
6e25fe8cb3 visualize_reconstruction fixes
Summary: Various fixes to get visualize_reconstruction running, and an interactive test for it.

Reviewed By: kjchalup

Differential Revision: D39286691

fbshipit-source-id: 88735034cc01736b24735bcb024577e6ab7ed336
2022-09-07 20:10:07 -07:00
Jeremy Reizenstein
90b758f725 hydra fix
Summary: Workaround for oddity with new hydra.

Reviewed By: davnov134

Differential Revision: D39280639

fbshipit-source-id: 76e91947f633589945446db93cf2dbc259642f8a
2022-09-07 03:25:39 -07:00
David Novotny
1163eaab43 CO3Dv2 trainer configs
Summary:
Adds yaml configs to train selected methods on CO3Dv2.

A few more updates:
1) moved some fields to base classes so that we can check is_multisequence in experiment.py
2) skip loading all train cameras for multisequence datasets; without this, co3d-fewview is untrainable
3) fix a bug in the json index dataset provider v2

Reviewed By: kjchalup

Differential Revision: D38952755

fbshipit-source-id: 3edac6fc8e20775aa70400bd73a0e6d52b091e0c
2022-08-30 13:42:19 -07:00
David Novotny
2ff2c7c836 Enable additional test-time source views for json dataset provider v2
Summary: Adds additional source views to the eval batches for evaluating many-view models on CO3D Challenge

Reviewed By: bottler

Differential Revision: D38705904

fbshipit-source-id: cf7d00dc7db926fbd1656dd97a729674e9ff5adb
2022-08-15 08:40:09 -07:00
Jeremy Reizenstein
a39cad40f4 LinearExponential LR
Summary: Linear followed by exponential LR progression. Needed for making Blender scenes converge.

Reviewed By: kjchalup

Differential Revision: D38557007

fbshipit-source-id: ad630dbc5b8fabcb33eeb5bdeed5e4f31360bac2
2022-08-09 18:18:46 -07:00
Krzysztof Chalupka
c83ec3555d Mods and bugfixes for LLFF and Blender repros
Summary:
LLFF (and most/all non-synth datasets) will have no background/foreground distinction. Add support for data with no fg mask.

Also, we had a bug in stats loading, like this:
  * Load stats
  * One of the stats has a history of length 0
  * That's fine, e.g. maybe it's fg_error but the dataset has no notion of fg/bg. So leave it as len 0
  * Check whether all the stats have the same history length as an arbitrarily chosen "reference-stat"
  * Ooops the reference-stat happened to be the stat with length 0
  * assert (legit_stat_len == reference_stat_len (=0)) ---> failed assert

Also some minor fixes (from Jeremy's other diff) to support LLFF

Reviewed By: davnov134

Differential Revision: D38475272

fbshipit-source-id: 5b35ac86d1d5239759f537621f41a3aa4eb3bd68
2022-08-09 15:04:44 -07:00
Jeremy Reizenstein
02c0254f7f more globalencoder followup
Summary: remove n_instances==0 special case, standardise args for GlobalEncoderBase's forward.

Reviewed By: shapovalov

Differential Revision: D37817340

fbshipit-source-id: 0aac5fbc7c336d09be9d412cffff5712bda27290
2022-08-05 03:33:30 -07:00
Jeremy Reizenstein
46e82efb4e clean IF args
Summary: continued - avoid duplicate inputs

Reviewed By: davnov134

Differential Revision: D38248827

fbshipit-source-id: 91ed398e304496a936f66e7a70ab3d189eeb5c70
2022-08-03 12:37:31 -07:00
Jeremy Reizenstein
078846d166 clean renderer args
Summary: continued - don't duplicate inputs

Reviewed By: kjchalup

Differential Revision: D38248829

fbshipit-source-id: 2d56418ecbec9cc597c3cf0c122199e274661516
2022-08-03 12:37:31 -07:00
Jeremy Reizenstein
f45893b845 clean raysampler args
Summary: Don't copy from one part of config to another, rather do the copy within GenericModel.

Reviewed By: davnov134

Differential Revision: D38248828

fbshipit-source-id: ff8af985c37ea1f7df9e0aa0a45a58df34c3f893
2022-08-03 12:37:31 -07:00
David Novotny
c3f8dad55c Move load_stats to TrainingLoop
Summary:
Stats are logically connected to the training loop, not to the model. Hence, moving them to the training loop.

Also removing resume_epoch from OptimizerFactory in favor of a single place - ModelFactory. This removes the need for config consistency checks etc.

Reviewed By: kjchalup

Differential Revision: D38313475

fbshipit-source-id: a1d188a63e28459df381ff98ad8acdcdb14887b7
2022-08-02 15:40:53 -07:00
Krzysztof Chalupka
760305e044 Fix test evaluation for Blender data
Summary: Blender data doesn't have depths or crops.

Reviewed By: shapovalov

Differential Revision: D38345583

fbshipit-source-id: a19300daf666bbfd799d0038aeefa14641c559d7
2022-08-02 12:40:21 -07:00
Jeremy Reizenstein
3a063f5976 SimpleDataLoaderMapProvider
Summary: Simple DataLoaderMapProvider instance

Reviewed By: davnov134

Differential Revision: D38326719

fbshipit-source-id: 58556833e76fae5790d25a59bea0aac4ce046bf1
2022-08-02 12:10:05 -07:00
Jeremy Reizenstein
f8bf528043 remove get_task
Summary: Remove the dataset's need to provide the task type.

Reviewed By: davnov134, kjchalup

Differential Revision: D38314000

fbshipit-source-id: 3805d885b5d4528abdc78c0da03247edb9abf3f7
2022-08-02 07:55:42 -07:00
Darijan Gudelj
37250a4326 Add forbidden fields to map_provider_v2
Summary:
Added _NEED_CONTROL to JsonIndexDatasetMapProviderV2 and made dataset_tweak_args use it.

Reviewed By: bottler

Differential Revision: D38313914

fbshipit-source-id: 529847571065dfba995b609a66737bd91e002cfe
2022-08-02 06:38:51 -07:00
Jeremy Reizenstein
3b7ab22d10 MeshRasterizerOpenGL import fixes
Summary: Only import it if you ask for it.

Reviewed By: kjchalup

Differential Revision: D38327167

fbshipit-source-id: 3f05231f26eda582a63afc71b669996342b0c6f9
2022-08-02 04:32:32 -07:00
David Novotny
80fc0ee0b6 Better seeding of random engines
Summary: Currently, seeds are set only inside the train loop. But this does not ensure that the model weights are initialized the same way everywhere, which makes all experiments irreproducible. This diff fixes it.
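
A generic sketch of the idea, seeding every engine once before the model is constructed rather than only inside the train loop (illustrative, not the exact Implicitron code):

```python
import random

import numpy as np
import torch


def seed_all(seed: int) -> None:
    """Seed every random engine so model-weight initialization is reproducible."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # also seeds CUDA generators on current PyTorch
```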

Reviewed By: bottler

Differential Revision: D38315840

fbshipit-source-id: 3d2ecebbc36072c2b68dd3cd8c5e30708e7dd808
2022-08-01 10:03:09 -07:00
Jeremy Reizenstein
14bd5e28e8 provide cow dataset
Summary: Make a dummy single-scene dataset using the code from generate_cow_renders (used in existing NeRF tutorials)

Reviewed By: kjchalup

Differential Revision: D38116910

fbshipit-source-id: 8db6df7098aa221c81d392e5cd21b0e67f65bd70
2022-08-01 01:52:12 -07:00
Krzysztof Chalupka
1b0584f7bd Replace pluggable components to create a proper Configurable hierarchy.
Summary:
This large diff rewrites a significant portion of Implicitron's config hierarchy. The new hierarchy, and some of the default implementation classes, are as follows:
```
Experiment
    data_source: ImplicitronDataSource
        dataset_map_provider
        data_loader_map_provider
    model_factory: ImplicitronModelFactory
        model: GenericModel
    optimizer_factory: ImplicitronOptimizerFactory
    training_loop: ImplicitronTrainingLoop
        evaluator: ImplicitronEvaluator
```

1) Experiment (used to be ExperimentConfig) is now a top-level Configurable and contains as members mainly (mostly new) high-level factory Configurables.
2) Experiment's job is to run factories, do some accelerate setup and then pass the results to the main training loop.
3) ImplicitronOptimizerFactory and ImplicitronModelFactory are new high-level factories that create the optimizer, scheduler, model, and stats objects.
4) TrainingLoop is a new configurable that runs the main training loop and the inner train-validate step.
5) Evaluator is a new configurable that TrainingLoop uses to run validation/test steps.
6) GenericModel is not the only model choice anymore. Instead, ImplicitronModelBase (by default instantiated with GenericModel) is a member of Experiment and can be easily replaced by a custom implementation by the user.

All the new Configurables are children of ReplaceableBase, and can be easily replaced with custom implementations.

In addition, I added support for the exponential LR schedule, updated the config files and the test, as well as added a config file that reproduces NERF results and a test to run the repro experiment.

Reviewed By: bottler

Differential Revision: D37723227

fbshipit-source-id: b36bee880d6aa53efdd2abfaae4489d8ab1e8a27
2022-07-29 17:32:51 -07:00
Roman Shapovalov
36ba079bef Fixes to CO3Dv2 provider.
Summary:
1. Random sampling of num batches without replacement is not supported.
2. Providers should implement the interface for the training loop to work.

Reviewed By: bottler, davnov134

Differential Revision: D37815388

fbshipit-source-id: 8a2795b524e733f07346ffdb20a9c0eb1a2b8190
2022-07-13 09:45:29 -07:00
Jeremy Reizenstein
d3b7f5f421 fix trainer test
Summary: After recent accelerate change D37543870 (aa8b03f31d), update interactive trainer test.

Reviewed By: shapovalov

Differential Revision: D37785932

fbshipit-source-id: 9211374323b6cfd80f6c5ff3a4fc1c0ca04b54ba
2022-07-12 07:20:21 -07:00
Nikhila Ravi
aa8b03f31d Updates to support Accelerate and multigpu training (#37)
Summary:
## Changes:
- Added Accelerate Library and refactored experiment.py to use it
- Needed to move `init_optimizer` and `ExperimentConfig` to a separate file to be compatible with submitit/hydra
- Needed to make some modifications to data loaders etc to work well with the accelerate ddp wrappers
- Loading/saving checkpoints incorporates an unwrapping step to remove the DDP wrapper from the model

## Tests

Tested with both `torchrun` and `submitit/hydra` on two gpus locally. Here are the commands:

**Torchrun**

Modules loaded:
```sh
1) anaconda3/2021.05   2) cuda/11.3   3) NCCL/2.9.8-3-cuda.11.3   4) gcc/5.2.0. (but unload gcc when using submit)
```

```sh
torchrun --nnodes=1 --nproc_per_node=2 experiment.py --config-path ./configs --config-name repro_singleseq_nerf_test
```

**Submitit/Hydra Local test**

```sh
~/pytorch3d/projects/implicitron_trainer$ HYDRA_FULL_ERROR=1 python3.9 experiment.py --config-name repro_singleseq_nerf_test --multirun --config-path ./configs  hydra/launcher=submitit_local hydra.launcher.gpus_per_node=2 hydra.launcher.tasks_per_node=2 hydra.launcher.nodes=1
```

**Submitit/Hydra distributed test**

```sh
~/implicitron/pytorch3d$ python3.9 experiment.py --config-name repro_singleseq_nerf_test --multirun --config-path ./configs  hydra/launcher=submitit_slurm hydra.launcher.gpus_per_node=8 hydra.launcher.tasks_per_node=8 hydra.launcher.nodes=1 hydra.launcher.partition=learnlab hydra.launcher.timeout_min=4320
```

## TODOS:
- Fix distributed evaluation: currently this doesn't work as the input format to the evaluation function is not suitable for gathering across gpus (needs to be nested list/tuple/dicts of objects that satisfy `is_torch_tensor`) and currently `frame_data`  contains `Cameras` type.
- Refactor the `accelerator` object to be accessible by all functions instead of needing to pass it around everywhere? Maybe have a `Trainer` class and add it as a method?
- Update readme with installation instructions for accelerate and also commands for running jobs with torchrun and submitit/hydra

X-link: https://github.com/fairinternal/pytorch3d/pull/37

Reviewed By: davnov134, kjchalup

Differential Revision: D37543870

Pulled By: bottler

fbshipit-source-id: be9eb4e91244d4fe3740d87dafec622ae1e0cf76
2022-07-11 19:29:58 -07:00
Jeremy Reizenstein
efb721320a extract camera_difficulty_bin_breaks
Summary: As part of removing Task, move the camera difficulty bin breaks from hard-coded values to the top level.

Reviewed By: davnov134

Differential Revision: D37491040

fbshipit-source-id: f2d6775ebc490f6f75020d13f37f6b588cc07a0b
2022-07-06 07:13:41 -07:00
Jeremy Reizenstein
771cf8a328 more padding options in Dataloader
Summary: Add facilities for dataloading non-sequential scenes.

Reviewed By: shapovalov

Differential Revision: D37291277

fbshipit-source-id: 0a33e3727b44c4f0cba3a2abe9b12f40d2a20447
2022-07-06 07:13:41 -07:00
David Novotny
0dce883241 Refactor autodecoders
Summary: Refactors autodecoders. Tests pass.

Reviewed By: bottler

Differential Revision: D37592429

fbshipit-source-id: 8f5c9eac254e1fdf0704d5ec5f69eb42f6225113
2022-07-04 07:18:03 -07:00
Krzysztof Chalupka
ae35824f21 Refactor ViewMetrics
Summary:
Make ViewMetrics easy to replace by putting them into an OmegaConf dataclass.

Also, rename a few variables and fix minor TODOs.

Reviewed By: bottler

Differential Revision: D37327157

fbshipit-source-id: 78d8e39bbb3548b952f10abbe05688409fb987cc
2022-06-30 09:22:01 -07:00
Jeremy Reizenstein
5c1ca757bb srn/idr followups
Summary: small followup to D37172537 (cba26506b6) and D37209012 (81d63c6382): changing default #harmonics and improving a test

Reviewed By: shapovalov

Differential Revision: D37412357

fbshipit-source-id: 1af1005a129425fd24fa6dd213d69c71632099a0
2022-06-24 04:07:15 -07:00
Jeremy Reizenstein
81d63c6382 idr harmonic_fns and doc
Summary: Document the inputs of the idr functions and distinguish n_harmonic_functions of 0 (simple embedding) versus -1 (no embedding).

Reviewed By: davnov134

Differential Revision: D37209012

fbshipit-source-id: 6e5c3eae54c4e5e8c3f76cad1caf162c6c222d52
2022-06-20 13:48:34 -07:00
Jeremy Reizenstein
cba26506b6 bg_color for lstm renderer
Summary: Allow specifying a color for non-opaque pixels in LSTMRenderer.

Reviewed By: davnov134

Differential Revision: D37172537

fbshipit-source-id: 6039726678bb7947f7d8cd04035b5023b2d5398c
2022-06-20 13:46:35 -07:00
Jeremy Reizenstein
65f667fd2e loading llff and blender datasets
Summary: Copy code from NeRF for loading LLFF data and blender synthetic data, and create dataset objects for them

Reviewed By: shapovalov

Differential Revision: D35581039

fbshipit-source-id: af7a6f3e9a42499700693381b5b147c991f57e5d
2022-06-16 03:09:15 -07:00
Jeremy Reizenstein
023a2369ae test configs are loadable
Summary: Add test that the yaml files deserialize.

Reviewed By: davnov134

Differential Revision: D36830673

fbshipit-source-id: b785d8db97b676686036760bfa2dd3fa638bda57
2022-06-10 12:22:46 -07:00
Jeremy Reizenstein
6275283202 pluggable JsonIndexDataset
Summary: Make dataset type and args configurable on JsonIndexDatasetMapProvider.

Reviewed By: davnov134

Differential Revision: D36666705

fbshipit-source-id: 4d0a3781d9a956504f51f1c7134c04edf1eb2846
2022-06-10 12:22:46 -07:00