Why does the machine running the Job Manager used by the Distributed Computing Toolbox have numerous large BlobStore files on disk?

When using the Distributed Computing Toolbox, I noticed many large files with names such as BlobStore.001 on the job manager machine's filesystem. These may have caused job failures by filling the disk.

Accepted Answer

MathWorks Support Team on 27 Jun 2009
The Job Manager uses BlobStore files to cache JobData on its way to workers.
To permanently remove completed jobs (and their attached data) from the Job Manager's database, use the DESTROY function on the jobs. Do this only after the output data has been retrieved from the job object.
If you do not destroy jobs once they have completed, as shown in the documentation examples, these BlobStore files can accumulate; this is expected behavior. More information about the DESTROY function is available on the following documentation page, which can be viewed by entering this command at the MATLAB prompt:
web([docroot,'/toolbox/distcomp/destroy.html'])
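As a rough sketch of the recommended pattern, the following retrieves a job's outputs before destroying it (the job manager name and task function here are placeholders, not taken from your configuration):

```matlab
% Locate the job manager (replace 'MyJobManager' with your configured name)
jm = findResource('scheduler', 'type', 'jobmanager', ...
                  'Name', 'MyJobManager');

% Create and submit a simple job (the task is a placeholder example)
job = createJob(jm);
createTask(job, @rand, 1, {1000});
submit(job);
waitForState(job, 'finished');

% Retrieve output data FIRST, then destroy the job so its
% JobData (and the associated BlobStore entries) can be freed
results = getAllOutputArguments(job);
destroy(job);
```

Destroying the job is what allows the Job Manager to reclaim the cached data; simply letting the job object go out of scope in MATLAB does not remove it from the database.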
If you submit large amounts of JobData, your Job Manager must have enough disk space to store all data that has been submitted but not yet completed. If your submissions contain more JobData than the Job Manager machine can store, consider storing the data on an accessible network drive and loading it directly on the workers, or increasing the storage on your Job Manager machine.
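One way to keep JobData small, sketched below under the assumption that all workers can reach a shared network path: pass only a file path to each task and have the worker load the data itself (the path and the `myAnalyze` function are hypothetical placeholders).

```matlab
% Instead of attaching large data to the job, pass a path on a
% shared drive; each worker loads the file locally, so the Job
% Manager never has to cache the data itself
dataFile = '\\fileserver\share\bigdata.mat';   % placeholder path
createTask(job, @(f) myAnalyze(load(f)), 1, {dataFile});
```

This trades Job Manager disk usage for network-drive access from each worker, so it only helps when the workers share a filesystem.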
