tc
Misc Info:
---------
TC Verification Order: 1) tc_lod.v
2) tc_tilemem.v
3) tc_div.v
4) tc_adj.v
5) tc_frac.v
6) tc_adrs.v
7) tc_sort.v
Test Cases:
note: * signifies snapshot point
tc_lod.v
- clamp
if (lod > 0x7fff)
lod_clamp = 0x7fff
else
if (lod < 0) or (lod < min_lev)
lod_clamp = min_lev
else
lod_clamp = lod
- lod_index
lod_index = int(log2(lod_clamp))
- min
min = (lod_clamp < 1.0)
- max
max = (lod_clamp >= 256) or (lod_index >= max_level)
- index_clamp
index_clamp = max ? max_level : lod_index
* l_tile
l_tile = f(index_clamp, lod_en, load, prim_tile, inc, inc2, cycle)
(see truth table on page 6)
* lod_ge_one_7d
lod_ge_one = (lod_clamp >= 1.0)
- fraction
fraction = (lod_clamp / (2 ^ lod_index)) - 1.0
- fract_clamp
if (sharp_en || det_en)
fract_clamp = fraction
else
{
if (min) fract_clamp = 0x00;
if (max) fract_clamp = 0xff;
}
* l_frac_7d
l_frac = (min && sharp_en) ? -fract_clamp : fract_clamp
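The fraction path above can be sketched the same way. The 8-bit output scale follows the 0x00/0xff clamp constants in the pseudocode; the exact rounding and the sharpen negation width are assumptions:

```python
# Reference model of the lod fraction path: fraction above the power
# of two, rail clamps when sharpen/detail are off, and the sharpen
# negation on min. 8-bit two's complement assumed for l_frac.

def lod_fraction(lod_c, lod_index, lod_min, lod_max, sharp_en, det_en):
    # fraction in [0, 1): how far lod_clamp sits above 2^lod_index
    fraction = lod_c / (1 << lod_index) - 1.0
    frac8 = int(fraction * 0x100) & 0xff
    if not (sharp_en or det_en):
        # without sharpen/detail, force the fraction to the rails
        if lod_min:
            frac8 = 0x00
        if lod_max:
            frac8 = 0xff
    # sharpen negates the fraction below lod 1.0
    l_frac = (-frac8) & 0xff if (lod_min and sharp_en) else frac8
    return frac8, l_frac
```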
* load_3d
load_3d = load delayed 3 cycles
* load_4d
load_4d = load delayed 4 cycles
- cycle
should go low 1 clock after st_span and then toggle every clock unless
ncyc == 0, in which case it should stay low
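The expected cycle waveform can be generated for comparison against the testbench trace. A sketch under assumptions: the flag idles high before the st_span event, and ncyc is static over the window:

```python
# Expected-value generator for the cycle flag: drops low one clock
# after st_span, then toggles every clock unless ncyc == 0, in which
# case it stays low. Idle-high before st_span is an assumption.

def cycle_stream(st_span_at, ncyc, nclocks):
    cycle, out = 1, []
    for t in range(nclocks):
        if t == st_span_at + 1:
            cycle = 0
        elif t > st_span_at + 1 and ncyc != 0:
            cycle ^= 1
        out.append(cycle)
    return out
```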
tc_tilemem.v
- write address 0-7 with unique attribute data
- write address 0-7 with unique size data
- read address 0-7 and verify do[94:0] with clock delays
- repeat with the ones' complement of the data
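The write/read/complement passes above can be sketched against a trivial behavioral model (8 entries, 95 bits wide per do[94:0]; the clock-delay checks of the real testbench are omitted, and the data pattern is arbitrary):

```python
# Sketch of the tc_tilemem pattern test: write unique data to all 8
# addresses, read back and verify, then repeat with the ones'
# complement of each word.

MASK95 = (1 << 95) - 1

def unique_word(addr):
    # any distinct per-address pattern will do for the uniqueness check
    return ((addr + 1) * 0x123456789ABCDEF0FEDCBA987) & MASK95

def run_pattern_test():
    tilemem = [0] * 8
    for pattern in (unique_word, lambda a: ~unique_word(a) & MASK95):
        for addr in range(8):            # write pass
            tilemem[addr] = pattern(addr)
        for addr in range(8):            # read/verify pass
            assert tilemem[addr] == pattern(addr)
    return True
```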
tc_div.v
* sw_out, tw_out, rcp2, shft2
- verify disable = f(load, persp_en)
- verify negative w clamp
- verify shift out of recip block by walking ones across w
- verify recip/multiply by walking through rom segments
- verify output shift
- verify over/underflow clamp
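The walking-ones stimulus mentioned above is easy to generate; each vector sets exactly one bit of w, so a wrong shift count out of the recip block shows up as a miscompare on exactly one vector (width 32 is an assumption):

```python
# "Walking ones" stimulus: a single set bit stepped across the input.

def walking_ones(width=32):
    return [1 << bit for bit in range(width)]
```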
tc_adj.v
* s_tile
set sl = 0, disable divide => verify right shift for pos and neg s_w, verify that
load disables shift.
verify subtract with largest positive, largest negative, etc.
* tile_diff
vary sh and sl (sh always >= sl)
* s_adrs
disable mask, disable s_over => verify s_under clamp with (s_tile < 0).
verify that clamp is disabled based on mask_s, clamp_s, load, copy. verify
shift_coord for positive s_tile.
verify s_over clamp by varying s_shifted (large negative -> large positive).
verify that s_over is disabled based on mask_s, clamp_s, load, copy.
set s_clamped to 0x3ff, mir_s = 0, vary mask_s[3:0] from 1 to 9 => verify s_masked.
enable mir_s, set mask_s=4, set s_clamped[10:1] = 0x020, 0x000 => verify s_masked.
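The mask/mirror cases above can be checked against a small reference model. The mirror rule (reflect when the bit just above the mask is set) follows the usual RDP-style wrap/mirror behavior and is an assumption about tc_adj.v internals:

```python
# Reference model of the s_masked step: keep the low mask_s bits,
# and when mirroring is enabled and the wrap bit is set, invert the
# coordinate within the mask.

def s_masked(s_clamped, mask_s, mir_s):
    wrap = (s_clamped >> mask_s) & 1          # wrap bit above the mask
    masked = s_clamped & ((1 << mask_s) - 1)  # keep low mask_s bits
    if mir_s and wrap:
        masked ^= (1 << mask_s) - 1           # mirror within the mask
    return masked
```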
* mask_en_1d
vary mask_s and load
* wrap_bit_1d
* s_all_one[3:0], s_all_zero[3:0]
* s_frac_1d
should be s_tile[4:0]. verify that zeroed based on s_over, s_under, mask, clamp.
repeat for t
tc_frac.v
* s_frac_ba_2d, t_frac_ba_2d, swap_ba_1d
vary s_frac, t_frac and copy to toggle swap and confirm the 2's complement of s_frac and t_frac
* s_frac_rg_2d, t_frac_rg_2d, swap_rg_1d
vary s_frac, t_frac, yuv type, copy and a[1] to toggle swap and confirm the 2's complements.
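The swap/negate behavior verified above can be sketched as follows. The 5-bit width follows s_frac_1d = s_tile[4:0] in the tc_adj notes, and the various swap terms (copy, yuv type, a[1]) are collapsed into a single boolean here:

```python
# Sketch of the fraction swap check: when swap is active, the 5-bit
# fractions are two's-complemented.

def frac_swap(s_frac, t_frac, swap):
    if swap:
        return (-s_frac) & 0x1f, (-t_frac) & 0x1f
    return s_frac, t_frac
```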
tc_adrs.v
* a[12:0]
verify s_nib for all sizes and types.
vary t, line, tmem_adrs, and s between their smallest and largest values, always
positive; this should exercise the multiply and adds. confirm that a[3] is killed by load.
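An expected-value model for this check might look like the following. The address structure (base + t*line + size-scaled s) is inferred from the description above; the exact field packing in tc_adrs.v is not specified here, so treat this as a sketch:

```python
# Assumed model of tmem address formation: a[12:0] from base plus
# t*line plus the size-scaled s term (s_nib), with a[3] killed by load.

def tmem_address(tmem_adrs, t, line, s_nib, load):
    a = (tmem_adrs + t * line + s_nib) & 0x1fff   # a[12:0]
    if load:
        a &= ~(1 << 3)                            # load kills a[3]
    return a
```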
* odd_t
lsb of t delayed one cycle
* b_adder, c_adder, d_adder
walk through truth table in notes
* shift
f(type==yuv, load)
* flip
f(adder mux selects)
* yuv_tex_1d
* load_5d
* copy_1d
* tex_type_1d
* tex_size_1d
tc_sort.v
* adrs_bnk(0L to 3H), adrs_a_1d to adrs_d_rg_1d
set (tlut_en = 0) and (vm_dv = 0)
vary a[12:0], b_adder, c_adder, d_adder to exercise adders.
vary vm_in and vm_dv to exercise high-half 2:1 mux.
the following tests have moved to tm verification:
confirm xor of [3] of addresses. also exercise address shift.
if not already covered, vary the short address bits of a,b,c,d to exercise
address sort muxes.
vary tlut_en, load to exercise high-half 2:1 address muxes.