[LLVMdev] 16 bit floats
Micah.Villmow at amd.com
Thu Feb 5 15:34:36 CST 2009
I need to do something similar, where I convert the 16-bit floats to 32-bit
floats on memory operations, for both scalar and vector formats. So can these
operations be implemented without adding 16-bit float support natively
to LLVM? If so, how?
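One way to do this without native f16 support is to store halves as i16 in memory and call a conversion routine on load. As a rough illustration (plain C, not LLVM code; the function name `half_to_float` is my own), the load-side conversion could look like:

```c
#include <stdint.h>
#include <string.h>

/* Convert an IEEE 754 binary16 value (held in a uint16_t) to a float.
   Handles normals, subnormals, zeros, infinities, and NaNs. */
static float half_to_float(uint16_t h)
{
    uint32_t sign = (uint32_t)(h & 0x8000u) << 16;   /* sign -> bit 31   */
    uint32_t exp  = (h >> 10) & 0x1Fu;               /* 5-bit exponent   */
    uint32_t mant = h & 0x3FFu;                      /* 10-bit mantissa  */
    uint32_t bits;

    if (exp == 0x1Fu) {
        /* Inf/NaN: max out the 8-bit float exponent, keep the mantissa. */
        bits = sign | 0x7F800000u | (mant << 13);
    } else if (exp == 0) {
        if (mant == 0) {
            bits = sign;                             /* +/- zero         */
        } else {
            /* Subnormal half: normalize it into a (normal) float.      */
            exp = 127 - 15 + 1;
            while (!(mant & 0x400u)) { mant <<= 1; exp--; }
            mant &= 0x3FFu;                          /* drop hidden bit  */
            bits = sign | (exp << 23) | (mant << 13);
        }
    } else {
        /* Normal: rebias the exponent from 15 to 127 (add 112).        */
        bits = sign | ((exp + 112) << 23) | (mant << 13);
    }

    float f;
    memcpy(&f, &bits, sizeof f);  /* type-pun without aliasing issues   */
    return f;
}
```

For vector formats, the same routine would simply be applied per lane after the i16 loads.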
From: llvmdev-bounces at cs.uiuc.edu [mailto:llvmdev-bounces at cs.uiuc.edu]
On Behalf Of Chris Lattner
Sent: Thursday, February 05, 2009 12:53 PM
To: LLVM Developers Mailing List
Subject: Re: [LLVMdev] 16 bit floats
On Feb 5, 2009, at 12:51 PM, BGB wrote:
----- Original Message -----
From: Villmow, Micah <mailto:Micah.Villmow at amd.com>
To: LLVM Developers Mailing List <mailto:llvmdev at cs.uiuc.edu>
Sent: Friday, February 06, 2009 5:47 AM
Subject: [LLVMdev] 16 bit floats
I need to support 16 bit floats for some operations, outside of
datatypes.td and the constants class, is there anything else I will need
to modify to add f16 support?
probably also code generation (I can't give specifics; I'm no real expert on
the LLVM codebase).
This would be because, even if the core type system knows of the type,
the codegen might not know how to emit operations on that type.
Now, of note:
in my project (not LLVM-based), float16 was not supported directly
(since it is not known to the CPU); rather, some loader and saver thunks
were used which converted to/from float32 (this being used as the 'internal'
representation of the type). In most cases, I would think this would be
faster than operating directly on the float16, since the CPU supports
float32, but float16 would have to be emulated.
(unless of course newer CPUs are adding native float16 support or
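The store-side ("saver") thunk described above could be sketched in plain C as follows; the name `float_to_half` is illustrative, and the rounding here is a simplified round-to-nearest (ties handled crudely), not a full IEEE round-to-nearest-even:

```c
#include <stdint.h>
#include <string.h>

/* Convert a float to an IEEE 754 binary16 bit pattern.
   Overflow saturates to infinity; tiny values flush toward zero. */
static uint16_t float_to_half(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);

    uint16_t sign = (uint16_t)((bits >> 16) & 0x8000u);
    int32_t  exp  = (int32_t)((bits >> 23) & 0xFFu) - 127 + 15;
    uint32_t mant = bits & 0x7FFFFFu;

    if (exp >= 0x1F) {
        /* Too large for half, or already inf/NaN. */
        if (((bits >> 23) & 0xFFu) == 0xFFu && mant)
            return (uint16_t)(sign | 0x7E00u);   /* NaN -> quiet NaN    */
        return (uint16_t)(sign | 0x7C00u);       /* +/- infinity        */
    }
    if (exp <= 0) {
        if (exp < -10)
            return sign;                         /* underflow to zero   */
        /* Subnormal half: restore the hidden bit, shift into place.    */
        mant |= 0x800000u;
        uint32_t shift = (uint32_t)(14 - exp);
        uint16_t h = (uint16_t)(sign | (mant >> shift));
        if (mant & (1u << (shift - 1))) h++;     /* simple rounding     */
        return h;
    }
    uint16_t h = (uint16_t)(sign | (uint32_t)(exp << 10) | (mant >> 13));
    if (mant & 0x1000u) h++;                     /* simple rounding     */
    return h;
}
```

A store of a half would then be `i16` store of `float_to_half(x)`; note the rounding increment can legitimately carry into the exponent field, which is the correct overflow behavior for this encoding.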
Right. Micah, does your CPU support float16 operations like add/sub, etc.?