[Tensor] apply ColoTensor on Torch functions (#821)

* Revert "[zero] add ZeroTensorShardStrategy (#793)"

This reverts commit 88759e289e.

* [gemini] set cpu memory capacity

* [log] collect local throughput

* polish

* polish code

* add a new tensor structure and override linear for it (see the sketch after this list)

* polish

* [tensor] rename and reorganize directory structure.

* rm useless dir

* polish

* [tensor] handle functions that are not wrapped (see the sketch below)
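
The bullets above describe the mechanism at a high level: ColoTensor intercepts calls into the torch namespace, substitutes a custom implementation for wrapped ops such as torch.nn.functional.linear, and falls back to plain torch for everything else. Below is a minimal sketch of one plausible way to build that dispatch, using PyTorch's __torch_function__ protocol. The registry and helper names here (_COLO_OVERRIDES, _register, torch_tensor) are illustrative assumptions, not the actual ColossalAI API.

import torch

# Hypothetical registry mapping torch functions to custom implementations.
_COLO_OVERRIDES = {}

def _register(torch_fn):
    def decorator(impl):
        _COLO_OVERRIDES[torch_fn] = impl
        return impl
    return decorator

@_register(torch.nn.functional.linear)
def _colo_linear(input, weight, bias=None):
    # A real implementation would insert sharding/communication here;
    # this sketch just unwraps the arguments and calls the dense kernel.
    def unwrap(x):
        return x.torch_tensor() if isinstance(x, ColoTensor) else x
    return torch.nn.functional.linear(unwrap(input), unwrap(weight), unwrap(bias))

class ColoTensor:
    """Minimal stand-in for the tensor structure added in this PR."""

    def __init__(self, t: torch.Tensor):
        self._t = t

    def torch_tensor(self) -> torch.Tensor:
        return self._t

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        if func in _COLO_OVERRIDES:
            # Wrapped path: route to the registered implementation.
            return _COLO_OVERRIDES[func](*args, **kwargs)
        # Fallback path for functions that are not wrapped (cf. test_no_wrap_op
        # below): unwrap every ColoTensor argument and call plain torch.
        args = tuple(a.torch_tensor() if isinstance(a, ColoTensor) else a for a in args)
        return func(*args, **kwargs)

Because dispatch goes through the __torch_function__ protocol, no torch function has to be monkey-patched: any call without a registered override takes the fallback branch automatically.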

@@ -54,6 +54,13 @@ def test_element_wise():
     assert allclose(torch.nn.functional.relu(t), torch.nn.functional.relu(t_ref))
 
 
+# Test a function not wrapped by ColoTensor
+def test_no_wrap_op():
+    t_ref = torch.randn(3, 5)
+    t = ColoTensor(t_ref.clone())
+    assert torch.sum(t) == torch.sum(t_ref)
+
+
 if __name__ == '__main__':
-    test_linear()
+    test_no_wrap_op()
     # test_element_wise()
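
Under the sketch above, the new test exercises exactly the fallback branch: torch.sum has no registered override, so the ColoTensor argument is unwrapped and the stock reduction kernel runs. A usage example, again written against the hypothetical sketch rather than the real API:

t_ref = torch.randn(3, 5)
t = ColoTensor(t_ref.clone())

# Fallback path: torch.sum is not wrapped, so the plain kernel runs.
assert torch.sum(t) == torch.sum(t_ref)

# Wrapped path: F.linear is registered, so _colo_linear handles it.
w = ColoTensor(torch.randn(4, 5))
out = torch.nn.functional.linear(t, w)
assert out.shape == (3, 4)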
