AUTHOR
============
Rico Jonschkowski (rico.j@gmx.de)

Accompanying paper
------------
Jonschkowski, Rico, Clemens Eppner, Sebastian Höfer, Roberto Martín-Martín, and Oliver Brock. "[Probabilistic Multi-Class Segmentation for the Amazon Picking Challenge](http://www.robotics.tu-berlin.de/fileadmin/fg170/Publikationen_pdf/Jonschkowski-16-IROS.pdf)." In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2016.


INSTALLATION
============

Dependencies
------------

* python

* opencv -- if you don't have opencv, you can install it using:

	    sudo apt-get install python-opencv

* gco_python (https://github.com/amueller/gco_python)

        pip install --user pygco

* sklearn (>= 0.16)
* matplotlib (>= 1.5.0)
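
A minimal sanity check that all dependencies are importable (a sketch; the import names below are the conventional ones for these packages, so adjust if your installation differs):

    # Minimal dependency check -- import names assumed from the packages above.
    import cv2         # opencv
    import pygco       # gco_python, installed as pygco
    import sklearn     # should be >= 0.16
    import matplotlib  # should be >= 1.5.0

    print(sklearn.__version__)
    print(matplotlib.__version__)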

Install
-------------

    python setup.py install


USAGE
=====

The following command reproduces the experiments and plots in the paper (modulo minor editing to improve the readability of the plots):

	python main.py


DATA
====

This repository includes the source code and a preprocessed version of the data (in the data/cache/ directory). The preprocessed data consist of several .pkl files, which can be loaded with Python's pickle module. Each file contains a set of data samples (see class APCDataSet in apc_data.py). Every data sample includes precomputed feature images that are cropped to the target bin and annotated with masks for the different objects (see class APCSample in apc_data.py).
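
For example, a cached dataset can be inspected like this (a minimal sketch; the file name is a placeholder, and attribute names should be checked against apc_data.py):

    import pickle

    # Unpickling requires the APCDataSet/APCSample classes, so run this
    # from the repository root where apc_data.py is importable.
    import apc_data

    # Placeholder file name -- use one of the .pkl files in data/cache/.
    with open('data/cache/some_dataset.pkl', 'rb') as f:
        dataset = pickle.load(f)  # an APCDataSet, see apc_data.py

    print(type(dataset))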

If you need access to the raw data (complete RGB-D images, feature images, and masks), e.g. because you want to extend this code or compare your own method against it, you can find the raw data here:

	https://owncloud.tu-berlin.de/public.php?service=files&t=709f973be5e5d18ef5aa2a0b3c83221f

To use the raw data instead of the cached data in the main script, copy the data into data/rbo_apc and, in main.py, use compute_datasets instead of load_datasets:

    if __name__ == "__main__":

        ...

        datasets = compute_datasets(dataset_names, dataset_path, cache_path) # compute from raw data
        #datasets = load_datasets(dataset_names, dataset_path, cache_path) # load from cached data

        ...